Test Report: KVM_Linux_crio 21409

432f5d8b8de395ddce63f21c968df47ae82ccbe6:2025-10-18:41964

Test failures (14/324)

TestAddons/parallel/Registry (74.46s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 9.594977ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
I1018 14:11:17.731086 1759792 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1018 14:11:17.731115 1759792 kapi.go:107] duration metric: took 10.993275ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:352: "registry-6b586f9694-z6m2x" [e32c82d5-bbaf-47cf-a6dd-4488d4e419e4] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005721857s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-tmmvd" [cb52b147-d27f-4a99-9ec8-ffd5f90861e4] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0053445s
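For reference, the two readiness waits above can be approximated by hand with kubectl wait; the context, namespace, and label selectors below are taken from this run's log:

    # Manual equivalents of the readiness waits above (names from this run):
    kubectl --context addons-891059 -n kube-system wait --for=condition=Ready \
      pod -l actual-registry=true --timeout=6m
    kubectl --context addons-891059 -n kube-system wait --for=condition=Ready \
      pod -l registry-proxy=true --timeout=10m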
addons_test.go:392: (dbg) Run:  kubectl --context addons-891059 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-891059 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Non-zero exit: kubectl --context addons-891059 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.085215494s)

-- stdout --
	pod "registry-test" deleted from default namespace

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:399: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-891059 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:403: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted from default namespace
*
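A timeout on this wget usually means that in-cluster DNS did not resolve registry.kube-system.svc.cluster.local, or that the Service had no ready endpoints at that moment. A quick way to check both, assuming this run's context and the Service name implied by the DNS name:

    # Does the Service exist and have endpoints? (names from this run)
    kubectl --context addons-891059 -n kube-system get svc registry
    kubectl --context addons-891059 -n kube-system get endpoints registry
    # Probe in-cluster DNS from a throwaway busybox pod:
    kubectl --context addons-891059 run dns-probe --rm -it --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -- nslookup registry.kube-system.svc.cluster.local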
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-891059 ip
2025/10/18 14:12:28 [DEBUG] GET http://192.168.39.100:5000
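This follow-up GET probes the same registry from the host through the node IP on port 5000 (the registry-proxy path). That path can be exercised manually; /v2/ is the standard Docker registry API root, so a reachable registry answers it with an HTTP status line:

    # Host-side probe of the registry through the node IP (binary path from this run):
    curl -sI "http://$(out/minikube-linux-amd64 -p addons-891059 ip):5000/v2/"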
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Registry]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-891059 -n addons-891059
helpers_test.go:252: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-891059 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-891059 logs -n 25: (1.459003431s)
helpers_test.go:260: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	│ start │ -o=json --download-only -p download-only-031579 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-031579 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ │
	│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ 18 Oct 25 14:08 UTC │
	│ delete │ -p download-only-031579 │ download-only-031579 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ 18 Oct 25 14:08 UTC │
	│ start │ -o=json --download-only -p download-only-398489 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-398489 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ │
	│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ 18 Oct 25 14:08 UTC │
	│ delete │ -p download-only-398489 │ download-only-398489 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ 18 Oct 25 14:08 UTC │
	│ delete │ -p download-only-031579 │ download-only-031579 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ 18 Oct 25 14:08 UTC │
	│ delete │ -p download-only-398489 │ download-only-398489 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ 18 Oct 25 14:08 UTC │
	│ start │ --download-only -p binary-mirror-305392 --alsologtostderr --binary-mirror http://127.0.0.1:39643 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ binary-mirror-305392 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ │
	│ delete │ -p binary-mirror-305392 │ binary-mirror-305392 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ 18 Oct 25 14:08 UTC │
	│ addons │ enable dashboard -p addons-891059 │ addons-891059 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ │
	│ addons │ disable dashboard -p addons-891059 │ addons-891059 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ │
	│ start │ -p addons-891059 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-891059 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ 18 Oct 25 14:11 UTC │
	│ addons │ addons-891059 addons disable volcano --alsologtostderr -v=1 │ addons-891059 │ jenkins │ v1.37.0 │ 18 Oct 25 14:11 UTC │ 18 Oct 25 14:11 UTC │
	│ addons │ addons-891059 addons disable gcp-auth --alsologtostderr -v=1 │ addons-891059 │ jenkins │ v1.37.0 │ 18 Oct 25 14:11 UTC │ 18 Oct 25 14:11 UTC │
	│ addons │ addons-891059 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-891059 │ jenkins │ v1.37.0 │ 18 Oct 25 14:11 UTC │ 18 Oct 25 14:11 UTC │
	│ addons │ addons-891059 addons disable metrics-server --alsologtostderr -v=1 │ addons-891059 │ jenkins │ v1.37.0 │ 18 Oct 25 14:11 UTC │ 18 Oct 25 14:11 UTC │
	│ addons │ addons-891059 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-891059 │ jenkins │ v1.37.0 │ 18 Oct 25 14:11 UTC │ 18 Oct 25 14:11 UTC │
	│ ip │ addons-891059 ip │ addons-891059 │ jenkins │ v1.37.0 │ 18 Oct 25 14:12 UTC │ 18 Oct 25 14:12 UTC │
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 14:08:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 14:08:38.383524 1760410 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:08:38.383797 1760410 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:08:38.383806 1760410 out.go:374] Setting ErrFile to fd 2...
	I1018 14:08:38.383810 1760410 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:08:38.383984 1760410 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1755824/.minikube/bin
	I1018 14:08:38.384564 1760410 out.go:368] Setting JSON to false
	I1018 14:08:38.385550 1760410 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":21066,"bootTime":1760775452,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 14:08:38.385650 1760410 start.go:141] virtualization: kvm guest
	I1018 14:08:38.387370 1760410 out.go:179] * [addons-891059] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 14:08:38.388598 1760410 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 14:08:38.388649 1760410 notify.go:220] Checking for updates...
	I1018 14:08:38.390750 1760410 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 14:08:38.391832 1760410 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1755824/kubeconfig
	I1018 14:08:38.392857 1760410 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1755824/.minikube
	I1018 14:08:38.393954 1760410 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 14:08:38.395387 1760410 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 14:08:38.397030 1760410 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 14:08:38.428089 1760410 out.go:179] * Using the kvm2 driver based on user configuration
	I1018 14:08:38.429204 1760410 start.go:305] selected driver: kvm2
	I1018 14:08:38.429233 1760410 start.go:925] validating driver "kvm2" against <nil>
	I1018 14:08:38.429248 1760410 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 14:08:38.429988 1760410 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 14:08:38.430081 1760410 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-1755824/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 14:08:38.444435 1760410 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 14:08:38.444496 1760410 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-1755824/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 14:08:38.459956 1760410 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 14:08:38.460007 1760410 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 14:08:38.460292 1760410 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 14:08:38.460324 1760410 cni.go:84] Creating CNI manager for ""
	I1018 14:08:38.460395 1760410 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 14:08:38.460407 1760410 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1018 14:08:38.460458 1760410 start.go:349] cluster config:
	{Name:addons-891059 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-891059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:08:38.460561 1760410 iso.go:125] acquiring lock: {Name:mk7faf1d3c636cdbb2becc20102b665984151b51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 14:08:38.462275 1760410 out.go:179] * Starting "addons-891059" primary control-plane node in "addons-891059" cluster
	I1018 14:08:38.463616 1760410 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 14:08:38.463663 1760410 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 14:08:38.463679 1760410 cache.go:58] Caching tarball of preloaded images
	I1018 14:08:38.463782 1760410 preload.go:233] Found /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 14:08:38.463797 1760410 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 14:08:38.464313 1760410 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/config.json ...
	I1018 14:08:38.464364 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/config.json: {Name:mk7320464dda7a1239a5641208a2baa2eb0aeb82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:08:38.464529 1760410 start.go:360] acquireMachinesLock for addons-891059: {Name:mkd96faf82baee5d117338197f9c6cbf4f45de94 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1018 14:08:38.464580 1760410 start.go:364] duration metric: took 35.666µs to acquireMachinesLock for "addons-891059"
	I1018 14:08:38.464596 1760410 start.go:93] Provisioning new machine with config: &{Name:addons-891059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-891059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 14:08:38.464647 1760410 start.go:125] createHost starting for "" (driver="kvm2")
	I1018 14:08:38.467259 1760410 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1018 14:08:38.467474 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:08:38.467524 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:08:38.481384 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38917
	I1018 14:08:38.481876 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:08:38.482458 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:08:38.482488 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:08:38.482906 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:08:38.483171 1760410 main.go:141] libmachine: (addons-891059) Calling .GetMachineName
	I1018 14:08:38.483408 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:08:38.483601 1760410 start.go:159] libmachine.API.Create for "addons-891059" (driver="kvm2")
	I1018 14:08:38.483638 1760410 client.go:168] LocalClient.Create starting
	I1018 14:08:38.483679 1760410 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem
	I1018 14:08:38.745193 1760410 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/cert.pem
	I1018 14:08:39.239522 1760410 main.go:141] libmachine: Running pre-create checks...
	I1018 14:08:39.239552 1760410 main.go:141] libmachine: (addons-891059) Calling .PreCreateCheck
	I1018 14:08:39.240096 1760410 main.go:141] libmachine: (addons-891059) Calling .GetConfigRaw
	I1018 14:08:39.240581 1760410 main.go:141] libmachine: Creating machine...
	I1018 14:08:39.240598 1760410 main.go:141] libmachine: (addons-891059) Calling .Create
	I1018 14:08:39.240735 1760410 main.go:141] libmachine: (addons-891059) creating domain...
	I1018 14:08:39.240756 1760410 main.go:141] libmachine: (addons-891059) creating network...
	I1018 14:08:39.242180 1760410 main.go:141] libmachine: (addons-891059) DBG | found existing default network
	I1018 14:08:39.242394 1760410 main.go:141] libmachine: (addons-891059) DBG | <network>
	I1018 14:08:39.242421 1760410 main.go:141] libmachine: (addons-891059) DBG |   <name>default</name>
	I1018 14:08:39.242432 1760410 main.go:141] libmachine: (addons-891059) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1018 14:08:39.242439 1760410 main.go:141] libmachine: (addons-891059) DBG |   <forward mode='nat'>
	I1018 14:08:39.242474 1760410 main.go:141] libmachine: (addons-891059) DBG |     <nat>
	I1018 14:08:39.242495 1760410 main.go:141] libmachine: (addons-891059) DBG |       <port start='1024' end='65535'/>
	I1018 14:08:39.242573 1760410 main.go:141] libmachine: (addons-891059) DBG |     </nat>
	I1018 14:08:39.242596 1760410 main.go:141] libmachine: (addons-891059) DBG |   </forward>
	I1018 14:08:39.242607 1760410 main.go:141] libmachine: (addons-891059) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1018 14:08:39.242619 1760410 main.go:141] libmachine: (addons-891059) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1018 14:08:39.242634 1760410 main.go:141] libmachine: (addons-891059) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1018 14:08:39.242645 1760410 main.go:141] libmachine: (addons-891059) DBG |     <dhcp>
	I1018 14:08:39.242658 1760410 main.go:141] libmachine: (addons-891059) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1018 14:08:39.242666 1760410 main.go:141] libmachine: (addons-891059) DBG |     </dhcp>
	I1018 14:08:39.242673 1760410 main.go:141] libmachine: (addons-891059) DBG |   </ip>
	I1018 14:08:39.242680 1760410 main.go:141] libmachine: (addons-891059) DBG | </network>
	I1018 14:08:39.242694 1760410 main.go:141] libmachine: (addons-891059) DBG | 
	I1018 14:08:39.243130 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:39.242976 1760437 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123570}
	I1018 14:08:39.243178 1760410 main.go:141] libmachine: (addons-891059) DBG | defining private network:
	I1018 14:08:39.243193 1760410 main.go:141] libmachine: (addons-891059) DBG | 
	I1018 14:08:39.243204 1760410 main.go:141] libmachine: (addons-891059) DBG | <network>
	I1018 14:08:39.243216 1760410 main.go:141] libmachine: (addons-891059) DBG |   <name>mk-addons-891059</name>
	I1018 14:08:39.243222 1760410 main.go:141] libmachine: (addons-891059) DBG |   <dns enable='no'/>
	I1018 14:08:39.243227 1760410 main.go:141] libmachine: (addons-891059) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1018 14:08:39.243234 1760410 main.go:141] libmachine: (addons-891059) DBG |     <dhcp>
	I1018 14:08:39.243239 1760410 main.go:141] libmachine: (addons-891059) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1018 14:08:39.243245 1760410 main.go:141] libmachine: (addons-891059) DBG |     </dhcp>
	I1018 14:08:39.243249 1760410 main.go:141] libmachine: (addons-891059) DBG |   </ip>
	I1018 14:08:39.243263 1760410 main.go:141] libmachine: (addons-891059) DBG | </network>
	I1018 14:08:39.243270 1760410 main.go:141] libmachine: (addons-891059) DBG | 
	I1018 14:08:39.248946 1760410 main.go:141] libmachine: (addons-891059) DBG | creating private network mk-addons-891059 192.168.39.0/24...
	I1018 14:08:39.319941 1760410 main.go:141] libmachine: (addons-891059) DBG | private network mk-addons-891059 192.168.39.0/24 created
	I1018 14:08:39.320210 1760410 main.go:141] libmachine: (addons-891059) DBG | <network>
	I1018 14:08:39.320231 1760410 main.go:141] libmachine: (addons-891059) DBG |   <name>mk-addons-891059</name>
	I1018 14:08:39.320247 1760410 main.go:141] libmachine: (addons-891059) setting up store path in /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059 ...
	I1018 14:08:39.320262 1760410 main.go:141] libmachine: (addons-891059) DBG |   <uuid>3e7dc5ca-8c6a-4f5a-8f08-752a5d85d27d</uuid>
	I1018 14:08:39.320883 1760410 main.go:141] libmachine: (addons-891059) building disk image from file:///home/jenkins/minikube-integration/21409-1755824/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1018 14:08:39.320919 1760410 main.go:141] libmachine: (addons-891059) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1018 14:08:39.320937 1760410 main.go:141] libmachine: (addons-891059) Downloading /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21409-1755824/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1018 14:08:39.320964 1760410 main.go:141] libmachine: (addons-891059) DBG |   <mac address='52:54:00:80:09:dc'/>
	I1018 14:08:39.320974 1760410 main.go:141] libmachine: (addons-891059) DBG |   <dns enable='no'/>
	I1018 14:08:39.320985 1760410 main.go:141] libmachine: (addons-891059) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1018 14:08:39.320997 1760410 main.go:141] libmachine: (addons-891059) DBG |     <dhcp>
	I1018 14:08:39.321006 1760410 main.go:141] libmachine: (addons-891059) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1018 14:08:39.321013 1760410 main.go:141] libmachine: (addons-891059) DBG |     </dhcp>
	I1018 14:08:39.321038 1760410 main.go:141] libmachine: (addons-891059) DBG |   </ip>
	I1018 14:08:39.321045 1760410 main.go:141] libmachine: (addons-891059) DBG | </network>
	I1018 14:08:39.321061 1760410 main.go:141] libmachine: (addons-891059) DBG | 
	I1018 14:08:39.321072 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:39.320218 1760437 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21409-1755824/.minikube
	I1018 14:08:39.610846 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:39.610682 1760437 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa...
	I1018 14:08:39.691572 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:39.691412 1760437 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/addons-891059.rawdisk...
	I1018 14:08:39.691603 1760410 main.go:141] libmachine: (addons-891059) DBG | Writing magic tar header
	I1018 14:08:39.691616 1760410 main.go:141] libmachine: (addons-891059) DBG | Writing SSH key tar header
	I1018 14:08:39.691625 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:39.691531 1760437 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059 ...
	I1018 14:08:39.691639 1760410 main.go:141] libmachine: (addons-891059) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059
	I1018 14:08:39.691766 1760410 main.go:141] libmachine: (addons-891059) setting executable bit set on /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059 (perms=drwx------)
	I1018 14:08:39.691804 1760410 main.go:141] libmachine: (addons-891059) setting executable bit set on /home/jenkins/minikube-integration/21409-1755824/.minikube/machines (perms=drwxr-xr-x)
	I1018 14:08:39.691812 1760410 main.go:141] libmachine: (addons-891059) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines
	I1018 14:08:39.691822 1760410 main.go:141] libmachine: (addons-891059) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-1755824/.minikube
	I1018 14:08:39.691828 1760410 main.go:141] libmachine: (addons-891059) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-1755824
	I1018 14:08:39.691835 1760410 main.go:141] libmachine: (addons-891059) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1018 14:08:39.691839 1760410 main.go:141] libmachine: (addons-891059) DBG | checking permissions on dir: /home/jenkins
	I1018 14:08:39.691848 1760410 main.go:141] libmachine: (addons-891059) DBG | checking permissions on dir: /home
	I1018 14:08:39.691853 1760410 main.go:141] libmachine: (addons-891059) DBG | skipping /home - not owner
	I1018 14:08:39.691897 1760410 main.go:141] libmachine: (addons-891059) setting executable bit set on /home/jenkins/minikube-integration/21409-1755824/.minikube (perms=drwxr-xr-x)
	I1018 14:08:39.691923 1760410 main.go:141] libmachine: (addons-891059) setting executable bit set on /home/jenkins/minikube-integration/21409-1755824 (perms=drwxrwxr-x)
	I1018 14:08:39.691940 1760410 main.go:141] libmachine: (addons-891059) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1018 14:08:39.691998 1760410 main.go:141] libmachine: (addons-891059) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1018 14:08:39.692026 1760410 main.go:141] libmachine: (addons-891059) defining domain...
	I1018 14:08:39.693006 1760410 main.go:141] libmachine: (addons-891059) defining domain using XML: 
	I1018 14:08:39.693019 1760410 main.go:141] libmachine: (addons-891059) <domain type='kvm'>
	I1018 14:08:39.693025 1760410 main.go:141] libmachine: (addons-891059)   <name>addons-891059</name>
	I1018 14:08:39.693030 1760410 main.go:141] libmachine: (addons-891059)   <memory unit='MiB'>4096</memory>
	I1018 14:08:39.693036 1760410 main.go:141] libmachine: (addons-891059)   <vcpu>2</vcpu>
	I1018 14:08:39.693040 1760410 main.go:141] libmachine: (addons-891059)   <features>
	I1018 14:08:39.693046 1760410 main.go:141] libmachine: (addons-891059)     <acpi/>
	I1018 14:08:39.693053 1760410 main.go:141] libmachine: (addons-891059)     <apic/>
	I1018 14:08:39.693058 1760410 main.go:141] libmachine: (addons-891059)     <pae/>
	I1018 14:08:39.693064 1760410 main.go:141] libmachine: (addons-891059)   </features>
	I1018 14:08:39.693069 1760410 main.go:141] libmachine: (addons-891059)   <cpu mode='host-passthrough'>
	I1018 14:08:39.693074 1760410 main.go:141] libmachine: (addons-891059)   </cpu>
	I1018 14:08:39.693078 1760410 main.go:141] libmachine: (addons-891059)   <os>
	I1018 14:08:39.693085 1760410 main.go:141] libmachine: (addons-891059)     <type>hvm</type>
	I1018 14:08:39.693090 1760410 main.go:141] libmachine: (addons-891059)     <boot dev='cdrom'/>
	I1018 14:08:39.693095 1760410 main.go:141] libmachine: (addons-891059)     <boot dev='hd'/>
	I1018 14:08:39.693100 1760410 main.go:141] libmachine: (addons-891059)     <bootmenu enable='no'/>
	I1018 14:08:39.693104 1760410 main.go:141] libmachine: (addons-891059)   </os>
	I1018 14:08:39.693134 1760410 main.go:141] libmachine: (addons-891059)   <devices>
	I1018 14:08:39.693159 1760410 main.go:141] libmachine: (addons-891059)     <disk type='file' device='cdrom'>
	I1018 14:08:39.693176 1760410 main.go:141] libmachine: (addons-891059)       <source file='/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/boot2docker.iso'/>
	I1018 14:08:39.693184 1760410 main.go:141] libmachine: (addons-891059)       <target dev='hdc' bus='scsi'/>
	I1018 14:08:39.693194 1760410 main.go:141] libmachine: (addons-891059)       <readonly/>
	I1018 14:08:39.693202 1760410 main.go:141] libmachine: (addons-891059)     </disk>
	I1018 14:08:39.693215 1760410 main.go:141] libmachine: (addons-891059)     <disk type='file' device='disk'>
	I1018 14:08:39.693225 1760410 main.go:141] libmachine: (addons-891059)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1018 14:08:39.693242 1760410 main.go:141] libmachine: (addons-891059)       <source file='/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/addons-891059.rawdisk'/>
	I1018 14:08:39.693252 1760410 main.go:141] libmachine: (addons-891059)       <target dev='hda' bus='virtio'/>
	I1018 14:08:39.693259 1760410 main.go:141] libmachine: (addons-891059)     </disk>
	I1018 14:08:39.693271 1760410 main.go:141] libmachine: (addons-891059)     <interface type='network'>
	I1018 14:08:39.693281 1760410 main.go:141] libmachine: (addons-891059)       <source network='mk-addons-891059'/>
	I1018 14:08:39.693293 1760410 main.go:141] libmachine: (addons-891059)       <model type='virtio'/>
	I1018 14:08:39.693303 1760410 main.go:141] libmachine: (addons-891059)     </interface>
	I1018 14:08:39.693324 1760410 main.go:141] libmachine: (addons-891059)     <interface type='network'>
	I1018 14:08:39.693354 1760410 main.go:141] libmachine: (addons-891059)       <source network='default'/>
	I1018 14:08:39.693363 1760410 main.go:141] libmachine: (addons-891059)       <model type='virtio'/>
	I1018 14:08:39.693367 1760410 main.go:141] libmachine: (addons-891059)     </interface>
	I1018 14:08:39.693373 1760410 main.go:141] libmachine: (addons-891059)     <serial type='pty'>
	I1018 14:08:39.693396 1760410 main.go:141] libmachine: (addons-891059)       <target port='0'/>
	I1018 14:08:39.693404 1760410 main.go:141] libmachine: (addons-891059)     </serial>
	I1018 14:08:39.693408 1760410 main.go:141] libmachine: (addons-891059)     <console type='pty'>
	I1018 14:08:39.693416 1760410 main.go:141] libmachine: (addons-891059)       <target type='serial' port='0'/>
	I1018 14:08:39.693426 1760410 main.go:141] libmachine: (addons-891059)     </console>
	I1018 14:08:39.693446 1760410 main.go:141] libmachine: (addons-891059)     <rng model='virtio'>
	I1018 14:08:39.693467 1760410 main.go:141] libmachine: (addons-891059)       <backend model='random'>/dev/random</backend>
	I1018 14:08:39.693482 1760410 main.go:141] libmachine: (addons-891059)     </rng>
	I1018 14:08:39.693492 1760410 main.go:141] libmachine: (addons-891059)   </devices>
	I1018 14:08:39.693501 1760410 main.go:141] libmachine: (addons-891059) </domain>
	I1018 14:08:39.693506 1760410 main.go:141] libmachine: (addons-891059) 
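The domain defined above, and the mk-addons-891059 network created earlier, can be inspected on the libvirt side with stock virsh commands; the object names are taken from this run:

    # Inspect the libvirt objects created above (names from this run):
    virsh dumpxml addons-891059         # full domain XML as libvirt stored it
    virsh net-dumpxml mk-addons-891059  # the private network definition
    virsh net-list --all                # confirm both networks are active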
	I1018 14:08:39.706650 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:f4:cf:b8 in network default
	I1018 14:08:39.707254 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:39.707274 1760410 main.go:141] libmachine: (addons-891059) starting domain...
	I1018 14:08:39.707286 1760410 main.go:141] libmachine: (addons-891059) ensuring networks are active...
	I1018 14:08:39.707989 1760410 main.go:141] libmachine: (addons-891059) Ensuring network default is active
	I1018 14:08:39.708292 1760410 main.go:141] libmachine: (addons-891059) Ensuring network mk-addons-891059 is active
	I1018 14:08:39.708895 1760410 main.go:141] libmachine: (addons-891059) getting domain XML...
	I1018 14:08:39.709831 1760410 main.go:141] libmachine: (addons-891059) DBG | starting domain XML:
	I1018 14:08:39.709853 1760410 main.go:141] libmachine: (addons-891059) DBG | <domain type='kvm'>
	I1018 14:08:39.709867 1760410 main.go:141] libmachine: (addons-891059) DBG |   <name>addons-891059</name>
	I1018 14:08:39.709876 1760410 main.go:141] libmachine: (addons-891059) DBG |   <uuid>372d9231-4fa4-4480-95fc-5052e6676096</uuid>
	I1018 14:08:39.709886 1760410 main.go:141] libmachine: (addons-891059) DBG |   <memory unit='KiB'>4194304</memory>
	I1018 14:08:39.709894 1760410 main.go:141] libmachine: (addons-891059) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1018 14:08:39.709903 1760410 main.go:141] libmachine: (addons-891059) DBG |   <vcpu placement='static'>2</vcpu>
	I1018 14:08:39.709907 1760410 main.go:141] libmachine: (addons-891059) DBG |   <os>
	I1018 14:08:39.709920 1760410 main.go:141] libmachine: (addons-891059) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1018 14:08:39.709930 1760410 main.go:141] libmachine: (addons-891059) DBG |     <boot dev='cdrom'/>
	I1018 14:08:39.709943 1760410 main.go:141] libmachine: (addons-891059) DBG |     <boot dev='hd'/>
	I1018 14:08:39.709954 1760410 main.go:141] libmachine: (addons-891059) DBG |     <bootmenu enable='no'/>
	I1018 14:08:39.709988 1760410 main.go:141] libmachine: (addons-891059) DBG |   </os>
	I1018 14:08:39.710010 1760410 main.go:141] libmachine: (addons-891059) DBG |   <features>
	I1018 14:08:39.710020 1760410 main.go:141] libmachine: (addons-891059) DBG |     <acpi/>
	I1018 14:08:39.710028 1760410 main.go:141] libmachine: (addons-891059) DBG |     <apic/>
	I1018 14:08:39.710042 1760410 main.go:141] libmachine: (addons-891059) DBG |     <pae/>
	I1018 14:08:39.710052 1760410 main.go:141] libmachine: (addons-891059) DBG |   </features>
	I1018 14:08:39.710065 1760410 main.go:141] libmachine: (addons-891059) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1018 14:08:39.710080 1760410 main.go:141] libmachine: (addons-891059) DBG |   <clock offset='utc'/>
	I1018 14:08:39.710094 1760410 main.go:141] libmachine: (addons-891059) DBG |   <on_poweroff>destroy</on_poweroff>
	I1018 14:08:39.710106 1760410 main.go:141] libmachine: (addons-891059) DBG |   <on_reboot>restart</on_reboot>
	I1018 14:08:39.710116 1760410 main.go:141] libmachine: (addons-891059) DBG |   <on_crash>destroy</on_crash>
	I1018 14:08:39.710124 1760410 main.go:141] libmachine: (addons-891059) DBG |   <devices>
	I1018 14:08:39.710141 1760410 main.go:141] libmachine: (addons-891059) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1018 14:08:39.710157 1760410 main.go:141] libmachine: (addons-891059) DBG |     <disk type='file' device='cdrom'>
	I1018 14:08:39.710174 1760410 main.go:141] libmachine: (addons-891059) DBG |       <driver name='qemu' type='raw'/>
	I1018 14:08:39.710189 1760410 main.go:141] libmachine: (addons-891059) DBG |       <source file='/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/boot2docker.iso'/>
	I1018 14:08:39.710202 1760410 main.go:141] libmachine: (addons-891059) DBG |       <target dev='hdc' bus='scsi'/>
	I1018 14:08:39.710213 1760410 main.go:141] libmachine: (addons-891059) DBG |       <readonly/>
	I1018 14:08:39.710241 1760410 main.go:141] libmachine: (addons-891059) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1018 14:08:39.710261 1760410 main.go:141] libmachine: (addons-891059) DBG |     </disk>
	I1018 14:08:39.710268 1760410 main.go:141] libmachine: (addons-891059) DBG |     <disk type='file' device='disk'>
	I1018 14:08:39.710278 1760410 main.go:141] libmachine: (addons-891059) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1018 14:08:39.710289 1760410 main.go:141] libmachine: (addons-891059) DBG |       <source file='/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/addons-891059.rawdisk'/>
	I1018 14:08:39.710297 1760410 main.go:141] libmachine: (addons-891059) DBG |       <target dev='hda' bus='virtio'/>
	I1018 14:08:39.710304 1760410 main.go:141] libmachine: (addons-891059) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1018 14:08:39.710311 1760410 main.go:141] libmachine: (addons-891059) DBG |     </disk>
	I1018 14:08:39.710317 1760410 main.go:141] libmachine: (addons-891059) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1018 14:08:39.710325 1760410 main.go:141] libmachine: (addons-891059) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1018 14:08:39.710331 1760410 main.go:141] libmachine: (addons-891059) DBG |     </controller>
	I1018 14:08:39.710338 1760410 main.go:141] libmachine: (addons-891059) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1018 14:08:39.710353 1760410 main.go:141] libmachine: (addons-891059) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1018 14:08:39.710359 1760410 main.go:141] libmachine: (addons-891059) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1018 14:08:39.710375 1760410 main.go:141] libmachine: (addons-891059) DBG |     </controller>
	I1018 14:08:39.710394 1760410 main.go:141] libmachine: (addons-891059) DBG |     <interface type='network'>
	I1018 14:08:39.710417 1760410 main.go:141] libmachine: (addons-891059) DBG |       <mac address='52:54:00:12:2f:9d'/>
	I1018 14:08:39.710440 1760410 main.go:141] libmachine: (addons-891059) DBG |       <source network='mk-addons-891059'/>
	I1018 14:08:39.710448 1760410 main.go:141] libmachine: (addons-891059) DBG |       <model type='virtio'/>
	I1018 14:08:39.710453 1760410 main.go:141] libmachine: (addons-891059) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1018 14:08:39.710459 1760410 main.go:141] libmachine: (addons-891059) DBG |     </interface>
	I1018 14:08:39.710463 1760410 main.go:141] libmachine: (addons-891059) DBG |     <interface type='network'>
	I1018 14:08:39.710469 1760410 main.go:141] libmachine: (addons-891059) DBG |       <mac address='52:54:00:f4:cf:b8'/>
	I1018 14:08:39.710473 1760410 main.go:141] libmachine: (addons-891059) DBG |       <source network='default'/>
	I1018 14:08:39.710478 1760410 main.go:141] libmachine: (addons-891059) DBG |       <model type='virtio'/>
	I1018 14:08:39.710499 1760410 main.go:141] libmachine: (addons-891059) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1018 14:08:39.710511 1760410 main.go:141] libmachine: (addons-891059) DBG |     </interface>
	I1018 14:08:39.710529 1760410 main.go:141] libmachine: (addons-891059) DBG |     <serial type='pty'>
	I1018 14:08:39.710546 1760410 main.go:141] libmachine: (addons-891059) DBG |       <target type='isa-serial' port='0'>
	I1018 14:08:39.710558 1760410 main.go:141] libmachine: (addons-891059) DBG |         <model name='isa-serial'/>
	I1018 14:08:39.710568 1760410 main.go:141] libmachine: (addons-891059) DBG |       </target>
	I1018 14:08:39.710575 1760410 main.go:141] libmachine: (addons-891059) DBG |     </serial>
	I1018 14:08:39.710584 1760410 main.go:141] libmachine: (addons-891059) DBG |     <console type='pty'>
	I1018 14:08:39.710590 1760410 main.go:141] libmachine: (addons-891059) DBG |       <target type='serial' port='0'/>
	I1018 14:08:39.710597 1760410 main.go:141] libmachine: (addons-891059) DBG |     </console>
	I1018 14:08:39.710602 1760410 main.go:141] libmachine: (addons-891059) DBG |     <input type='mouse' bus='ps2'/>
	I1018 14:08:39.710611 1760410 main.go:141] libmachine: (addons-891059) DBG |     <input type='keyboard' bus='ps2'/>
	I1018 14:08:39.710619 1760410 main.go:141] libmachine: (addons-891059) DBG |     <audio id='1' type='none'/>
	I1018 14:08:39.710635 1760410 main.go:141] libmachine: (addons-891059) DBG |     <memballoon model='virtio'>
	I1018 14:08:39.710650 1760410 main.go:141] libmachine: (addons-891059) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1018 14:08:39.710670 1760410 main.go:141] libmachine: (addons-891059) DBG |     </memballoon>
	I1018 14:08:39.710681 1760410 main.go:141] libmachine: (addons-891059) DBG |     <rng model='virtio'>
	I1018 14:08:39.710688 1760410 main.go:141] libmachine: (addons-891059) DBG |       <backend model='random'>/dev/random</backend>
	I1018 14:08:39.710700 1760410 main.go:141] libmachine: (addons-891059) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1018 14:08:39.710714 1760410 main.go:141] libmachine: (addons-891059) DBG |     </rng>
	I1018 14:08:39.710725 1760410 main.go:141] libmachine: (addons-891059) DBG |   </devices>
	I1018 14:08:39.710731 1760410 main.go:141] libmachine: (addons-891059) DBG | </domain>
	I1018 14:08:39.710744 1760410 main.go:141] libmachine: (addons-891059) DBG | 
	I1018 14:08:41.127813 1760410 main.go:141] libmachine: (addons-891059) waiting for domain to start...
	I1018 14:08:41.129181 1760410 main.go:141] libmachine: (addons-891059) domain is now running
	I1018 14:08:41.129199 1760410 main.go:141] libmachine: (addons-891059) waiting for IP...
	I1018 14:08:41.130215 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:41.130734 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:41.130765 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:41.131111 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:41.131182 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:41.131117 1760437 retry.go:31] will retry after 310.436274ms: waiting for domain to come up
	I1018 14:08:41.443955 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:41.444643 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:41.444667 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:41.444959 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:41.445013 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:41.444938 1760437 retry.go:31] will retry after 310.095624ms: waiting for domain to come up
	I1018 14:08:41.756412 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:41.756912 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:41.756985 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:41.757237 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:41.757264 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:41.757211 1760437 retry.go:31] will retry after 403.034899ms: waiting for domain to come up
	I1018 14:08:42.161632 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:42.162259 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:42.162290 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:42.162631 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:42.162653 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:42.162588 1760437 retry.go:31] will retry after 392.033324ms: waiting for domain to come up
	I1018 14:08:42.555954 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:42.556467 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:42.556490 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:42.556794 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:42.556833 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:42.556772 1760437 retry.go:31] will retry after 563.122226ms: waiting for domain to come up
	I1018 14:08:43.121698 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:43.122213 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:43.122240 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:43.122649 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:43.122673 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:43.122588 1760437 retry.go:31] will retry after 654.00858ms: waiting for domain to come up
	I1018 14:08:43.778430 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:43.778988 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:43.779017 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:43.779284 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:43.779359 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:43.779296 1760437 retry.go:31] will retry after 861.369309ms: waiting for domain to come up
	I1018 14:08:44.642386 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:44.642972 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:44.643001 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:44.643258 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:44.643325 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:44.643266 1760437 retry.go:31] will retry after 1.120629341s: waiting for domain to come up
	I1018 14:08:45.765704 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:45.766202 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:45.766225 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:45.766596 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:45.766622 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:45.766568 1760437 retry.go:31] will retry after 1.280814413s: waiting for domain to come up
	I1018 14:08:47.049323 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:47.049871 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:47.049898 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:47.050228 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:47.050287 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:47.050222 1760437 retry.go:31] will retry after 2.205238568s: waiting for domain to come up
	I1018 14:08:49.257773 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:49.258389 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:49.258419 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:49.258809 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:49.258836 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:49.258779 1760437 retry.go:31] will retry after 2.31868491s: waiting for domain to come up
	I1018 14:08:51.580165 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:51.580745 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:51.580775 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:51.581147 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:51.581179 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:51.581113 1760437 retry.go:31] will retry after 2.275257905s: waiting for domain to come up
	I1018 14:08:53.858516 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:53.859085 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:53.859110 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:53.859415 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:53.859447 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:53.859390 1760437 retry.go:31] will retry after 3.968512343s: waiting for domain to come up
	I1018 14:08:57.829253 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:57.829924 1760410 main.go:141] libmachine: (addons-891059) found domain IP: 192.168.39.100
	I1018 14:08:57.829948 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has current primary IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
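
The retry lines above come from a poll-with-backoff loop: each failed lease/ARP lookup schedules another attempt after a growing, jittered delay until the DHCP lease shows an address. A minimal sketch of that shape (the base delay, growth factor and jitter are assumptions, not minikube's constants):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls check() with a jittered, roughly geometric backoff until it
// succeeds or the deadline passes.
func waitFor(check func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for {
		ip, err := check()
		if err == nil {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", errors.New("timed out waiting for domain to come up")
		}
		// Jitter ±25% so concurrent waiters don't poll in lockstep.
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		sleep := delay*3/4 + jitter
		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 13 / 10 // grow ~1.3x per attempt
	}
}

func main() {
	attempts := 0
	ip, err := waitFor(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no network interface addresses found")
		}
		return "192.168.39.100", nil
	}, time.Minute)
	fmt.Println(ip, err)
}
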
	I1018 14:08:57.829954 1760410 main.go:141] libmachine: (addons-891059) reserving static IP address...
	I1018 14:08:57.830357 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find host DHCP lease matching {name: "addons-891059", mac: "52:54:00:12:2f:9d", ip: "192.168.39.100"} in network mk-addons-891059
	I1018 14:08:58.036271 1760410 main.go:141] libmachine: (addons-891059) DBG | Getting to WaitForSSH function...
	I1018 14:08:58.036306 1760410 main.go:141] libmachine: (addons-891059) reserved static IP address 192.168.39.100 for domain addons-891059
	I1018 14:08:58.036334 1760410 main.go:141] libmachine: (addons-891059) waiting for SSH...
	I1018 14:08:58.039556 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.040071 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:minikube Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:58.040113 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.040427 1760410 main.go:141] libmachine: (addons-891059) DBG | Using SSH client type: external
	I1018 14:08:58.040457 1760410 main.go:141] libmachine: (addons-891059) DBG | Using SSH private key: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa (-rw-------)
	I1018 14:08:58.040489 1760410 main.go:141] libmachine: (addons-891059) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1018 14:08:58.040505 1760410 main.go:141] libmachine: (addons-891059) DBG | About to run SSH command:
	I1018 14:08:58.040518 1760410 main.go:141] libmachine: (addons-891059) DBG | exit 0
	I1018 14:08:58.178221 1760410 main.go:141] libmachine: (addons-891059) DBG | SSH cmd err, output: <nil>: 
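
WaitForSSH shells out to the system ssh binary with the non-interactive options logged above and treats a clean `exit 0` as proof that sshd in the guest is up and accepting the machine key. A self-contained sketch using os/exec, with the flags copied from the log (the key path argument is a placeholder):

package main

import (
	"log"
	"os/exec"
)

// sshAlive runs `exit 0` over an external ssh client; a nil error means a
// zero exit status, i.e. the guest accepted our key and ran the command.
func sshAlive(user, ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no",
		"-o", "ControlPath=none",
		"-o", "LogLevel=quiet",
		"-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		user + "@" + ip,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	if err := sshAlive("docker", "192.168.39.100", "/path/to/id_rsa"); err != nil {
		log.Fatal(err)
	}
}
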
	I1018 14:08:58.178611 1760410 main.go:141] libmachine: (addons-891059) domain creation complete
	I1018 14:08:58.178979 1760410 main.go:141] libmachine: (addons-891059) Calling .GetConfigRaw
	I1018 14:08:58.179725 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:08:58.179914 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:08:58.180097 1760410 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1018 14:08:58.180117 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:08:58.181922 1760410 main.go:141] libmachine: Detecting operating system of created instance...
	I1018 14:08:58.181937 1760410 main.go:141] libmachine: Waiting for SSH to be available...
	I1018 14:08:58.181946 1760410 main.go:141] libmachine: Getting to WaitForSSH function...
	I1018 14:08:58.181953 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:58.184676 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.185179 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:58.185207 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.185454 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:58.185640 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:58.185815 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:58.185930 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:58.186116 1760410 main.go:141] libmachine: Using SSH client type: native
	I1018 14:08:58.186465 1760410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1018 14:08:58.186483 1760410 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1018 14:08:58.305360 1760410 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 14:08:58.305387 1760410 main.go:141] libmachine: Detecting the provisioner...
	I1018 14:08:58.305399 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:58.308732 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.309086 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:58.309110 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.309407 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:58.309679 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:58.309898 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:58.310049 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:58.310245 1760410 main.go:141] libmachine: Using SSH client type: native
	I1018 14:08:58.310526 1760410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1018 14:08:58.310542 1760410 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1018 14:08:58.429225 1760410 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1018 14:08:58.429329 1760410 main.go:141] libmachine: found compatible host: buildroot
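
Provisioner detection boils down to running `cat /etc/os-release` over SSH and reading the KEY=value pairs back, as dumped above. A small sketch of that parse (a hypothetical helper, not minikube's actual parser):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// parseOSRelease turns /etc/os-release into a map; the provisioner is then
// chosen off the ID field (here, "buildroot").
func parseOSRelease(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	info := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		info[k] = strings.Trim(v, `"`) // values may be quoted, e.g. PRETTY_NAME
	}
	return info, sc.Err()
}

func main() {
	info, err := parseOSRelease("/etc/os-release")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(info["ID"], info["VERSION_ID"]) // e.g. buildroot 2025.02
}
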
	I1018 14:08:58.429364 1760410 main.go:141] libmachine: Provisioning with buildroot...
	I1018 14:08:58.429383 1760410 main.go:141] libmachine: (addons-891059) Calling .GetMachineName
	I1018 14:08:58.429696 1760410 buildroot.go:166] provisioning hostname "addons-891059"
	I1018 14:08:58.429732 1760410 main.go:141] libmachine: (addons-891059) Calling .GetMachineName
	I1018 14:08:58.429974 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:58.433221 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.433619 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:58.433638 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.433891 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:58.434117 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:58.434290 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:58.434435 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:58.434615 1760410 main.go:141] libmachine: Using SSH client type: native
	I1018 14:08:58.434828 1760410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1018 14:08:58.434841 1760410 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-891059 && echo "addons-891059" | sudo tee /etc/hostname
	I1018 14:08:58.571164 1760410 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-891059
	
	I1018 14:08:58.571201 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:58.574587 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.575023 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:58.575060 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.575255 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:58.575484 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:58.575706 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:58.575818 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:58.576059 1760410 main.go:141] libmachine: Using SSH client type: native
	I1018 14:08:58.576292 1760410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1018 14:08:58.576310 1760410 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-891059' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-891059/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-891059' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 14:08:58.705558 1760410 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 14:08:58.705593 1760410 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21409-1755824/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-1755824/.minikube}
	I1018 14:08:58.705650 1760410 buildroot.go:174] setting up certificates
	I1018 14:08:58.705677 1760410 provision.go:84] configureAuth start
	I1018 14:08:58.705691 1760410 main.go:141] libmachine: (addons-891059) Calling .GetMachineName
	I1018 14:08:58.706037 1760410 main.go:141] libmachine: (addons-891059) Calling .GetIP
	I1018 14:08:58.709084 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.709428 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:58.709454 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.709701 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:58.712025 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.712527 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:58.712572 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.712679 1760410 provision.go:143] copyHostCerts
	I1018 14:08:58.712765 1760410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.pem (1082 bytes)
	I1018 14:08:58.712925 1760410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-1755824/.minikube/cert.pem (1123 bytes)
	I1018 14:08:58.713027 1760410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-1755824/.minikube/key.pem (1675 bytes)
	I1018 14:08:58.713099 1760410 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca-key.pem org=jenkins.addons-891059 san=[127.0.0.1 192.168.39.100 addons-891059 localhost minikube]
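
The server certificate is issued for exactly the SAN list logged above (two IPs plus three DNS names), signed by the local minikube CA. A compressed sketch with crypto/x509; the throwaway CA, key sizes and validity window are illustrative assumptions, not minikube's exact choices:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// In practice the CA pair is loaded from ca.pem / ca-key.pem; we mint a
	// throwaway one so the example is self-contained.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-891059"}},
		// SANs from the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.100")},
		DNSNames:    []string{"addons-891059", "localhost", "minikube"},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(24 * time.Hour),
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
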
	I1018 14:08:59.195381 1760410 provision.go:177] copyRemoteCerts
	I1018 14:08:59.195454 1760410 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 14:08:59.195481 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:59.198489 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.198846 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.198881 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.199059 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:59.199299 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:59.199483 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:59.199691 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:08:59.292928 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 14:08:59.325386 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 14:08:59.357335 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 14:08:59.389117 1760410 provision.go:87] duration metric: took 683.421516ms to configureAuth
	I1018 14:08:59.389152 1760410 buildroot.go:189] setting minikube options for container-runtime
	I1018 14:08:59.389391 1760410 config.go:182] Loaded profile config "addons-891059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:08:59.389501 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:59.392319 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.392710 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.392752 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.392932 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:59.393164 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:59.393457 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:59.393687 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:59.393910 1760410 main.go:141] libmachine: Using SSH client type: native
	I1018 14:08:59.394130 1760410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1018 14:08:59.394146 1760410 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 14:08:59.663506 1760410 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 14:08:59.663540 1760410 main.go:141] libmachine: Checking connection to Docker...
	I1018 14:08:59.663551 1760410 main.go:141] libmachine: (addons-891059) Calling .GetURL
	I1018 14:08:59.665074 1760410 main.go:141] libmachine: (addons-891059) DBG | using libvirt version 8000000
	I1018 14:08:59.668182 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.668663 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.668695 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.668860 1760410 main.go:141] libmachine: Docker is up and running!
	I1018 14:08:59.668875 1760410 main.go:141] libmachine: Reticulating splines...
	I1018 14:08:59.668883 1760410 client.go:171] duration metric: took 21.185236601s to LocalClient.Create
	I1018 14:08:59.668913 1760410 start.go:167] duration metric: took 21.185315141s to libmachine.API.Create "addons-891059"
	I1018 14:08:59.668930 1760410 start.go:293] postStartSetup for "addons-891059" (driver="kvm2")
	I1018 14:08:59.668947 1760410 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 14:08:59.668967 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:08:59.669233 1760410 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 14:08:59.669269 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:59.671533 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.671957 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.671985 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.672144 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:59.672364 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:59.672523 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:59.672667 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:08:59.764031 1760410 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 14:08:59.769115 1760410 info.go:137] Remote host: Buildroot 2025.02
	I1018 14:08:59.769146 1760410 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1755824/.minikube/addons for local assets ...
	I1018 14:08:59.769224 1760410 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1755824/.minikube/files for local assets ...
	I1018 14:08:59.769248 1760410 start.go:296] duration metric: took 100.307576ms for postStartSetup
	I1018 14:08:59.769292 1760410 main.go:141] libmachine: (addons-891059) Calling .GetConfigRaw
	I1018 14:08:59.769961 1760410 main.go:141] libmachine: (addons-891059) Calling .GetIP
	I1018 14:08:59.773479 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.773901 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.773934 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.774210 1760410 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/config.json ...
	I1018 14:08:59.774465 1760410 start.go:128] duration metric: took 21.309794025s to createHost
	I1018 14:08:59.774492 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:59.777128 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.777506 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.777535 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.777745 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:59.777961 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:59.778171 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:59.778305 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:59.778500 1760410 main.go:141] libmachine: Using SSH client type: native
	I1018 14:08:59.778740 1760410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1018 14:08:59.778756 1760410 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1018 14:08:59.897254 1760410 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760796539.858103251
	
	I1018 14:08:59.897279 1760410 fix.go:216] guest clock: 1760796539.858103251
	I1018 14:08:59.897287 1760410 fix.go:229] Guest: 2025-10-18 14:08:59.858103251 +0000 UTC Remote: 2025-10-18 14:08:59.774480854 +0000 UTC m=+21.430607980 (delta=83.622397ms)
	I1018 14:08:59.897336 1760410 fix.go:200] guest clock delta is within tolerance: 83.622397ms
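
The clock check parses the guest's `date +%s.%N` output against the host-side timestamp captured just before the SSH round trip; with the values from the log the delta works out to exactly the 83.622397ms reported. A sketch of that computation (the 1s tolerance used here is an assumed bound, not necessarily minikube's configured one):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses "seconds.nanoseconds" from the guest and returns
// its offset from the supplied local timestamp.
func guestClockDelta(out string, local time.Time) (time.Duration, error) {
	sec, frac, _ := strings.Cut(strings.TrimSpace(out), ".")
	s, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return 0, err
	}
	ns, err := strconv.ParseInt(frac, 10, 64)
	if err != nil {
		return 0, err
	}
	return time.Unix(s, ns).Sub(local), nil
}

func main() {
	local := time.Unix(1760796539, 774480854) // the "Remote" timestamp from the log
	delta, _ := guestClockDelta("1760796539.858103251\n", local)
	fmt.Printf("delta=%v within tolerance: %v\n",
		delta, math.Abs(float64(delta)) < float64(time.Second))
}
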
	I1018 14:08:59.897364 1760410 start.go:83] releasing machines lock for "addons-891059", held for 21.432776387s
	I1018 14:08:59.897398 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:08:59.897684 1760410 main.go:141] libmachine: (addons-891059) Calling .GetIP
	I1018 14:08:59.901076 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.901487 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.901521 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.901705 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:08:59.902565 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:08:59.902783 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:08:59.902886 1760410 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 14:08:59.902954 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:59.903079 1760410 ssh_runner.go:195] Run: cat /version.json
	I1018 14:08:59.903102 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:59.906580 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.906633 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.907079 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.907125 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.907149 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.907167 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.907386 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:59.907427 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:59.907642 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:59.907647 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:59.907824 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:59.907846 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:59.908031 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:08:59.908099 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:08:59.992932 1760410 ssh_runner.go:195] Run: systemctl --version
	I1018 14:09:00.021820 1760410 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 14:09:00.183446 1760410 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 14:09:00.190803 1760410 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 14:09:00.190911 1760410 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 14:09:00.213058 1760410 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 14:09:00.213091 1760410 start.go:495] detecting cgroup driver to use...
	I1018 14:09:00.213178 1760410 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 14:09:00.233624 1760410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 14:09:00.252522 1760410 docker.go:218] disabling cri-docker service (if available) ...
	I1018 14:09:00.252617 1760410 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 14:09:00.272205 1760410 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 14:09:00.289717 1760410 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 14:09:00.439992 1760410 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 14:09:00.649208 1760410 docker.go:234] disabling docker service ...
	I1018 14:09:00.649292 1760410 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 14:09:00.666373 1760410 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 14:09:00.682992 1760410 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 14:09:00.835422 1760410 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 14:09:00.982700 1760410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
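
The sequence above stops, disables and masks the cri-docker and docker units so that only CRI-O stays active after the next daemon-reload. A local sketch of that sequence, tolerating stop failures since a unit may not exist on the image (unit names from the log; minikube executes these over SSH):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Best-effort stops first; a missing unit is not fatal.
	for _, unit := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
		_ = exec.Command("sudo", "systemctl", "stop", "-f", unit).Run()
	}
	// disable/mask make the state stick across reboots.
	for _, args := range [][]string{
		{"disable", "cri-docker.socket"},
		{"mask", "cri-docker.service"},
		{"disable", "docker.socket"},
		{"mask", "docker.service"},
	} {
		if err := exec.Command("sudo", append([]string{"systemctl"}, args...)...).Run(); err != nil {
			fmt.Println("systemctl", args, err)
		}
	}
}
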
	I1018 14:09:00.999428 1760410 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 14:09:01.024799 1760410 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 14:09:01.024906 1760410 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:09:01.038654 1760410 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 14:09:01.038752 1760410 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:09:01.052374 1760410 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:09:01.066305 1760410 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:09:01.080191 1760410 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 14:09:01.094600 1760410 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:09:01.108084 1760410 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:09:01.131069 1760410 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:09:01.144608 1760410 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 14:09:01.156726 1760410 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1018 14:09:01.156791 1760410 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1018 14:09:01.180230 1760410 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
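
The failed sysctl above is expected on a fresh guest: /proc/sys/net/bridge/ only exists once br_netfilter is loaded, so the fallback is a modprobe followed by enabling IPv4 forwarding. A local sketch of that probe-then-load sequence (minikube runs the same commands over SSH):

package main

import (
	"fmt"
	"os/exec"
)

func ensureBridgeNetfilter() error {
	// Probe the key; failure usually just means the module isn't loaded yet.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	// Pods need the node to forward IPv4 traffic.
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println(err)
	}
}
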
	I1018 14:09:01.193680 1760410 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 14:09:01.335791 1760410 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 14:09:01.461561 1760410 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 14:09:01.461683 1760410 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 14:09:01.467775 1760410 start.go:563] Will wait 60s for crictl version
	I1018 14:09:01.467870 1760410 ssh_runner.go:195] Run: which crictl
	I1018 14:09:01.472812 1760410 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1018 14:09:01.516410 1760410 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1018 14:09:01.516518 1760410 ssh_runner.go:195] Run: crio --version
	I1018 14:09:01.548303 1760410 ssh_runner.go:195] Run: crio --version
	I1018 14:09:01.582529 1760410 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1018 14:09:01.583814 1760410 main.go:141] libmachine: (addons-891059) Calling .GetIP
	I1018 14:09:01.588147 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:01.588628 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:01.588667 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:01.588973 1760410 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1018 14:09:01.594159 1760410 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
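
The one-liner above is an idempotent upsert on /etc/hosts: filter out any existing tab-separated entry for the host, append the fresh mapping, and copy the temp file back as root. The same idea in Go (a hypothetical helper that writes directly instead of via sudo cp, so it needs root on the real file):

package main

import (
	"fmt"
	"os"
	"strings"
)

func upsertHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var out []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry; replaced below
		}
		out = append(out, line)
	}
	out = append(out, ip+"\t"+host)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(out, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path) // atomic swap, like the cp of /tmp/h.$$
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
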
	I1018 14:09:01.610280 1760410 kubeadm.go:883] updating cluster {Name:addons-891059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-891059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 14:09:01.610462 1760410 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 14:09:01.610527 1760410 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 14:09:01.648777 1760410 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1018 14:09:01.648866 1760410 ssh_runner.go:195] Run: which lz4
	I1018 14:09:01.653595 1760410 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1018 14:09:01.658875 1760410 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1018 14:09:01.658909 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1018 14:09:03.215465 1760410 crio.go:462] duration metric: took 1.561899205s to copy over tarball
	I1018 14:09:03.215548 1760410 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1018 14:09:04.890701 1760410 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.675118935s)
	I1018 14:09:04.890741 1760410 crio.go:469] duration metric: took 1.675237586s to extract the tarball
	I1018 14:09:04.890755 1760410 ssh_runner.go:146] rm: /preloaded.tar.lz4
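
The preload flow above is: stat the tarball (missing on a fresh guest), scp it over, untar it into /var with lz4 compression and extended attributes preserved, then delete it. A sketch of the guest-side half (paths and tar flags from the log; in the real run the failed stat is what triggers the scp, and both the extract and the removal need root plus an lz4 binary):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func extractPreload(tarball string) error {
	// Here we simply require the file; minikube copies it over on a miss.
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("tarball missing (would be scp'd first): %w", err)
	}
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		return err
	}
	return os.Remove(tarball) // free the ~400MB once extracted
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}
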
	I1018 14:09:04.933819 1760410 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 14:09:04.980242 1760410 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 14:09:04.980269 1760410 cache_images.go:85] Images are preloaded, skipping loading
	I1018 14:09:04.980277 1760410 kubeadm.go:934] updating node { 192.168.39.100 8443 v1.34.1 crio true true} ...
	I1018 14:09:04.980412 1760410 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-891059 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-891059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 14:09:04.980487 1760410 ssh_runner.go:195] Run: crio config
	I1018 14:09:05.031493 1760410 cni.go:84] Creating CNI manager for ""
	I1018 14:09:05.031532 1760410 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 14:09:05.031561 1760410 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 14:09:05.031594 1760410 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.100 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-891059 NodeName:addons-891059 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 14:09:05.031791 1760410 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-891059"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.100"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.100"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
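Config blocks like the one above are rendered from Go templates before being copied to the guest. A toy sketch of rendering one fragment with text/template; the template text mirrors the output above, but the field names are illustrative, not minikube's actual template variables:

package main

import (
	"os"
	"text/template"
)

var tmpl = template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
`))

func main() {
	// Values taken from the kubeadm options line earlier in this log.
	tmpl.Execute(os.Stdout, map[string]interface{}{
		"NodeIP":        "192.168.39.100",
		"APIServerPort": 8443,
		"CRISocket":     "/var/run/crio/crio.sock",
		"NodeName":      "addons-891059",
	})
}
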
	I1018 14:09:05.031889 1760410 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 14:09:05.045249 1760410 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 14:09:05.045322 1760410 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 14:09:05.057594 1760410 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1018 14:09:05.079304 1760410 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 14:09:05.101229 1760410 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1018 14:09:05.123379 1760410 ssh_runner.go:195] Run: grep 192.168.39.100	control-plane.minikube.internal$ /etc/hosts
	I1018 14:09:05.128149 1760410 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 14:09:05.144740 1760410 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 14:09:05.287867 1760410 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 14:09:05.310139 1760410 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059 for IP: 192.168.39.100
	I1018 14:09:05.310175 1760410 certs.go:195] generating shared ca certs ...
	I1018 14:09:05.310203 1760410 certs.go:227] acquiring lock for ca certs: {Name:mk20fae4d22bb4937e66ac0eaa52c1608fa22770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:05.310412 1760410 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.key
	I1018 14:09:05.928678 1760410 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt ...
	I1018 14:09:05.928717 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt: {Name:mk48305fdb94e31a92b48facef68eec843776b87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:05.928918 1760410 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.key ...
	I1018 14:09:05.928931 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.key: {Name:mk701e118ad43b61f158a839f73ec6b965102354 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:05.929018 1760410 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.key
	I1018 14:09:06.043454 1760410 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.crt ...
	I1018 14:09:06.043488 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.crt: {Name:mk77ddeb4af674721966c75040f4f1fb5d69023d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:06.043679 1760410 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.key ...
	I1018 14:09:06.043694 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.key: {Name:mk65d64f37c13d41fae5e3b77d20098229c0b1de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:06.043772 1760410 certs.go:257] generating profile certs ...
	I1018 14:09:06.043835 1760410 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.key
	I1018 14:09:06.043862 1760410 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt with IP's: []
	I1018 14:09:06.259815 1760410 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt ...
	I1018 14:09:06.259852 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt: {Name:mk812f759d940b265a8e60c894cb050949fd9e68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:06.260037 1760410 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.key ...
	I1018 14:09:06.260054 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.key: {Name:mk50fce6a65f5d969bea0e1a48d418e711ccdfe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:06.260134 1760410 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.key.c2889daa
	I1018 14:09:06.260154 1760410 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.crt.c2889daa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.100]
	I1018 14:09:06.486406 1760410 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.crt.c2889daa ...
	I1018 14:09:06.486442 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.crt.c2889daa: {Name:mk13f44e79eaa89077b52da6090b647e00b64732 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:06.486629 1760410 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.key.c2889daa ...
	I1018 14:09:06.486643 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.key.c2889daa: {Name:mkbe94bfad32eaf986c1751799d5eb527ff32552 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:06.486733 1760410 certs.go:382] copying /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.crt.c2889daa -> /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.crt
	I1018 14:09:06.486836 1760410 certs.go:386] copying /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.key.c2889daa -> /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.key
	I1018 14:09:06.486900 1760410 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/proxy-client.key
	I1018 14:09:06.486924 1760410 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/proxy-client.crt with IP's: []
	I1018 14:09:06.798152 1760410 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/proxy-client.crt ...
	I1018 14:09:06.798201 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/proxy-client.crt: {Name:mk29883864de081c2ef5f64c49afd825bbef9059 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:06.798410 1760410 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/proxy-client.key ...
	I1018 14:09:06.798426 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/proxy-client.key: {Name:mk619e894bc6a3076fe0e333221023492d7ff3e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:06.798649 1760410 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 14:09:06.798690 1760410 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem (1082 bytes)
	I1018 14:09:06.798715 1760410 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/cert.pem (1123 bytes)
	I1018 14:09:06.798735 1760410 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/key.pem (1675 bytes)
	I1018 14:09:06.799486 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 14:09:06.845692 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 14:09:06.882745 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 14:09:06.918371 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 14:09:06.952411 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 14:09:06.985595 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 14:09:07.018257 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 14:09:07.051475 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 14:09:07.086174 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 14:09:07.118849 1760410 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 14:09:07.141590 1760410 ssh_runner.go:195] Run: openssl version
	I1018 14:09:07.148896 1760410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 14:09:07.163684 1760410 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 14:09:07.169573 1760410 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 14:09 /usr/share/ca-certificates/minikubeCA.pem
	I1018 14:09:07.169638 1760410 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 14:09:07.177781 1760410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
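The two Run lines above wire the minikube CA into the system trust store: OpenSSL locates trusted CAs under /etc/ssl/certs by an 8-hex-digit subject hash, which is why the symlink is named b5213941.0. A quick check, run inside the guest (e.g. via minikube ssh):

	# Recompute the subject hash that names the trust-store symlink.
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941
	# Confirm the symlink resolves back to the CA file.
	readlink -f /etc/ssl/certs/b5213941.0
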
	I1018 14:09:07.192577 1760410 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 14:09:07.199705 1760410 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 14:09:07.199768 1760410 kubeadm.go:400] StartCluster: {Name:addons-891059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-891059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:09:07.199879 1760410 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 14:09:07.199953 1760410 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 14:09:07.241737 1760410 cri.go:89] found id: ""
	I1018 14:09:07.241827 1760410 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 14:09:07.254574 1760410 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 14:09:07.267441 1760410 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 14:09:07.280136 1760410 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 14:09:07.280159 1760410 kubeadm.go:157] found existing configuration files:
	
	I1018 14:09:07.280207 1760410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 14:09:07.292712 1760410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 14:09:07.292791 1760410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 14:09:07.305268 1760410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 14:09:07.317524 1760410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 14:09:07.317645 1760410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 14:09:07.330484 1760410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 14:09:07.342579 1760410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 14:09:07.342663 1760410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 14:09:07.355673 1760410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 14:09:07.367952 1760410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 14:09:07.368036 1760410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 14:09:07.381331 1760410 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1018 14:09:07.547925 1760410 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 14:09:20.098002 1760410 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 14:09:20.098063 1760410 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 14:09:20.098145 1760410 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 14:09:20.098299 1760410 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 14:09:20.098447 1760410 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 14:09:20.098529 1760410 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 14:09:20.100393 1760410 out.go:252]   - Generating certificates and keys ...
	I1018 14:09:20.100495 1760410 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 14:09:20.100629 1760410 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 14:09:20.100764 1760410 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 14:09:20.100857 1760410 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 14:09:20.100964 1760410 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 14:09:20.101051 1760410 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 14:09:20.101129 1760410 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 14:09:20.101315 1760410 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-891059 localhost] and IPs [192.168.39.100 127.0.0.1 ::1]
	I1018 14:09:20.101405 1760410 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 14:09:20.101571 1760410 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-891059 localhost] and IPs [192.168.39.100 127.0.0.1 ::1]
	I1018 14:09:20.101672 1760410 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 14:09:20.101744 1760410 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 14:09:20.101795 1760410 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 14:09:20.101843 1760410 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 14:09:20.101896 1760410 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 14:09:20.101961 1760410 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 14:09:20.102011 1760410 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 14:09:20.102082 1760410 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 14:09:20.102127 1760410 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 14:09:20.102199 1760410 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 14:09:20.102260 1760410 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 14:09:20.103813 1760410 out.go:252]   - Booting up control plane ...
	I1018 14:09:20.103893 1760410 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 14:09:20.103954 1760410 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 14:09:20.104007 1760410 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 14:09:20.104089 1760410 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 14:09:20.104181 1760410 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 14:09:20.104334 1760410 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 14:09:20.104446 1760410 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 14:09:20.104482 1760410 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 14:09:20.104625 1760410 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 14:09:20.104745 1760410 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 14:09:20.104820 1760410 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.50245312s
	I1018 14:09:20.104902 1760410 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 14:09:20.104976 1760410 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.100:8443/livez
	I1018 14:09:20.105057 1760410 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 14:09:20.105126 1760410 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 14:09:20.105186 1760410 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.213660902s
	I1018 14:09:20.105249 1760410 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.327835251s
	I1018 14:09:20.105309 1760410 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.50283692s
	I1018 14:09:20.105410 1760410 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 14:09:20.105516 1760410 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 14:09:20.105572 1760410 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 14:09:20.105752 1760410 kubeadm.go:318] [mark-control-plane] Marking the node addons-891059 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 14:09:20.105817 1760410 kubeadm.go:318] [bootstrap-token] Using token: ci4c4o.8llcllq96muz9osf
	I1018 14:09:20.108036 1760410 out.go:252]   - Configuring RBAC rules ...
	I1018 14:09:20.108126 1760410 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 14:09:20.108210 1760410 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 14:09:20.108332 1760410 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 14:09:20.108465 1760410 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 14:09:20.108571 1760410 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 14:09:20.108668 1760410 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 14:09:20.108821 1760410 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 14:09:20.108863 1760410 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 14:09:20.108900 1760410 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 14:09:20.108911 1760410 kubeadm.go:318] 
	I1018 14:09:20.108961 1760410 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 14:09:20.108967 1760410 kubeadm.go:318] 
	I1018 14:09:20.109026 1760410 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 14:09:20.109031 1760410 kubeadm.go:318] 
	I1018 14:09:20.109051 1760410 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 14:09:20.109098 1760410 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 14:09:20.109140 1760410 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 14:09:20.109146 1760410 kubeadm.go:318] 
	I1018 14:09:20.109214 1760410 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 14:09:20.109221 1760410 kubeadm.go:318] 
	I1018 14:09:20.109258 1760410 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 14:09:20.109264 1760410 kubeadm.go:318] 
	I1018 14:09:20.109311 1760410 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 14:09:20.109381 1760410 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 14:09:20.109469 1760410 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 14:09:20.109488 1760410 kubeadm.go:318] 
	I1018 14:09:20.109554 1760410 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 14:09:20.109622 1760410 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 14:09:20.109628 1760410 kubeadm.go:318] 
	I1018 14:09:20.109698 1760410 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ci4c4o.8llcllq96muz9osf \
	I1018 14:09:20.109796 1760410 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b3c5d368998c8b590f32f5883c53beccabaf63a2ceb2a6106ae6129f9dfd2290 \
	I1018 14:09:20.109908 1760410 kubeadm.go:318] 	--control-plane 
	I1018 14:09:20.109934 1760410 kubeadm.go:318] 
	I1018 14:09:20.110067 1760410 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 14:09:20.110077 1760410 kubeadm.go:318] 
	I1018 14:09:20.110176 1760410 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ci4c4o.8llcllq96muz9osf \
	I1018 14:09:20.110279 1760410 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b3c5d368998c8b590f32f5883c53beccabaf63a2ceb2a6106ae6129f9dfd2290 
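The --discovery-token-ca-cert-hash in the join commands is the SHA-256 of the cluster CA's public key. It can be recomputed on the control plane with the command from the kubeadm documentation; the sketch below assumes minikube's cert path from the scp lines above (/var/lib/minikube/certs/ca.crt) and an RSA CA, which is what minikube generates:

	# Recompute the discovery-token CA cert hash for join verification.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
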
	I1018 14:09:20.110293 1760410 cni.go:84] Creating CNI manager for ""
	I1018 14:09:20.110301 1760410 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 14:09:20.111886 1760410 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1018 14:09:20.113016 1760410 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1018 14:09:20.127933 1760410 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1018 14:09:20.158289 1760410 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 14:09:20.158398 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:20.158416 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-891059 minikube.k8s.io/updated_at=2025_10_18T14_09_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404 minikube.k8s.io/name=addons-891059 minikube.k8s.io/primary=true
	I1018 14:09:20.315678 1760410 ops.go:34] apiserver oom_adj: -16
	I1018 14:09:20.315834 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:20.816073 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:21.316085 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:21.816909 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:22.316182 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:22.816708 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:23.316221 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:23.816476 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:24.316683 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:24.414532 1760410 kubeadm.go:1113] duration metric: took 4.256222081s to wait for elevateKubeSystemPrivileges
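The elevateKubeSystemPrivileges wait above polls "kubectl get sa default" until the default service account exists; the minikube-rbac ClusterRoleBinding created at 14:09:20.158 then grants kube-system:default cluster-admin. A hedged way to confirm the binding took effect, assuming kubectl targets this profile:

	# Should print "yes" once the minikube-rbac binding is active.
	kubectl auth can-i '*' '*' --as=system:serviceaccount:kube-system:default
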
	I1018 14:09:24.414583 1760410 kubeadm.go:402] duration metric: took 17.214819054s to StartCluster
	I1018 14:09:24.414614 1760410 settings.go:142] acquiring lock: {Name:mkc4a015ef1628793f35d59d734503738678fa0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:24.414803 1760410 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-1755824/kubeconfig
	I1018 14:09:24.415376 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/kubeconfig: {Name:mkd0359d239071160661347e1005ef052a3265ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:24.415641 1760410 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 14:09:24.415700 1760410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 14:09:24.415754 1760410 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1018 14:09:24.415887 1760410 addons.go:69] Setting yakd=true in profile "addons-891059"
	I1018 14:09:24.415896 1760410 config.go:182] Loaded profile config "addons-891059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:09:24.415930 1760410 addons.go:238] Setting addon yakd=true in "addons-891059"
	I1018 14:09:24.415941 1760410 addons.go:69] Setting registry-creds=true in profile "addons-891059"
	I1018 14:09:24.415953 1760410 addons.go:238] Setting addon registry-creds=true in "addons-891059"
	I1018 14:09:24.415971 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.415979 1760410 addons.go:69] Setting volcano=true in profile "addons-891059"
	I1018 14:09:24.415983 1760410 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-891059"
	I1018 14:09:24.415991 1760410 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-891059"
	I1018 14:09:24.415998 1760410 addons.go:69] Setting volumesnapshots=true in profile "addons-891059"
	I1018 14:09:24.416010 1760410 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-891059"
	I1018 14:09:24.415959 1760410 addons.go:69] Setting inspektor-gadget=true in profile "addons-891059"
	I1018 14:09:24.416026 1760410 addons.go:69] Setting storage-provisioner=true in profile "addons-891059"
	I1018 14:09:24.416035 1760410 addons.go:238] Setting addon storage-provisioner=true in "addons-891059"
	I1018 14:09:24.415990 1760410 addons.go:238] Setting addon volcano=true in "addons-891059"
	I1018 14:09:24.416051 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.416063 1760410 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-891059"
	I1018 14:09:24.416073 1760410 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-891059"
	I1018 14:09:24.416105 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.416110 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.416136 1760410 addons.go:69] Setting metrics-server=true in profile "addons-891059"
	I1018 14:09:24.416172 1760410 addons.go:238] Setting addon metrics-server=true in "addons-891059"
	I1018 14:09:24.416211 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.416266 1760410 addons.go:69] Setting registry=true in profile "addons-891059"
	I1018 14:09:24.416290 1760410 addons.go:238] Setting addon registry=true in "addons-891059"
	I1018 14:09:24.416318 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.416454 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.416462 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.415971 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.416496 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.416504 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.416536 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.416546 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.416565 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.416634 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.416670 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.416702 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.416010 1760410 addons.go:238] Setting addon volumesnapshots=true in "addons-891059"
	I1018 14:09:24.416740 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.416750 1760410 addons.go:69] Setting cloud-spanner=true in profile "addons-891059"
	I1018 14:09:24.416761 1760410 addons.go:238] Setting addon cloud-spanner=true in "addons-891059"
	I1018 14:09:24.416772 1760410 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-891059"
	I1018 14:09:24.416738 1760410 addons.go:69] Setting gcp-auth=true in profile "addons-891059"
	I1018 14:09:24.416797 1760410 mustload.go:65] Loading cluster: addons-891059
	I1018 14:09:24.416803 1760410 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-891059"
	I1018 14:09:24.416808 1760410 addons.go:69] Setting ingress-dns=true in profile "addons-891059"
	I1018 14:09:24.416054 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.416816 1760410 addons.go:69] Setting default-storageclass=true in profile "addons-891059"
	I1018 14:09:24.416827 1760410 addons.go:69] Setting ingress=true in profile "addons-891059"
	I1018 14:09:24.416838 1760410 addons.go:238] Setting addon ingress=true in "addons-891059"
	I1018 14:09:24.416838 1760410 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-891059"
	I1018 14:09:24.416009 1760410 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-891059"
	I1018 14:09:24.416036 1760410 addons.go:238] Setting addon inspektor-gadget=true in "addons-891059"
	I1018 14:09:24.416819 1760410 addons.go:238] Setting addon ingress-dns=true in "addons-891059"
	I1018 14:09:24.417180 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.417202 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.417220 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.417277 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.417301 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.417457 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.417670 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.417700 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.417772 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.417855 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.417889 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.417365 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.418030 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.418152 1760410 config.go:182] Loaded profile config "addons-891059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:09:24.418393 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.418424 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.418444 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.418521 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.418552 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.418624 1760410 out.go:179] * Verifying Kubernetes components...
	I1018 14:09:24.418907 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.418967 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.422521 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.422570 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.422950 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.423390 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.423424 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.425453 1760410 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 14:09:24.428788 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.428847 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.432739 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.432818 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.446515 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41227
	I1018 14:09:24.447603 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.448044 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41701
	I1018 14:09:24.448620 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.449130 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.449150 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.450319 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.450375 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.450390 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.452314 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.452974 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.453024 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.455440 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43969
	I1018 14:09:24.456592 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.456640 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.459616 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46693
	I1018 14:09:24.459757 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38083
	I1018 14:09:24.459794 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42705
	I1018 14:09:24.460277 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.460735 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46237
	I1018 14:09:24.460955 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.463457 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.463624 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.463650 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.463943 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.463970 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.464096 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.464766 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.464811 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.466143 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.466259 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.466646 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.467503 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.467526 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.468700 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.468724 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.469056 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.469102 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.469455 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.469522 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.470074 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.470106 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.470616 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.470636 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.471024 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41521
	I1018 14:09:24.471853 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.472590 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.472616 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.473010 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.473088 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.473315 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.473750 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34635
	I1018 14:09:24.474289 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.474360 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.474951 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.477612 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34041
	I1018 14:09:24.478762 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.479308 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.479333 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.479844 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.480258 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.480895 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.482303 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46095
	I1018 14:09:24.483440 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.483700 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.483715 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.483863 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.483872 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.484222 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.484556 1760410 addons.go:238] Setting addon default-storageclass=true in "addons-891059"
	I1018 14:09:24.484598 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.484735 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.484774 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.484961 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.485003 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.485644 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.486185 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.486221 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.488758 1760410 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-891059"
	I1018 14:09:24.488809 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.489181 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.489230 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.489519 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.489701 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46233
	I1018 14:09:24.494198 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43169
	I1018 14:09:24.495236 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.496047 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.496066 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.496101 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41357
	I1018 14:09:24.496638 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34835
	I1018 14:09:24.496952 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.497036 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45323
	I1018 14:09:24.497223 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.497670 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.497914 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.498318 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.498682 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38593
	I1018 14:09:24.498718 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.498744 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.499070 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.499580 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.499603 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.499631 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.499677 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.499736 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.500137 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.500171 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.500183 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.500231 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.500253 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.500704 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.500747 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.501004 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.501037 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.501047 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.501305 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.501852 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.501890 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.505372 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.505855 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.508424 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.508460 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.508580 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.509093 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.509143 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.510293 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33423
	I1018 14:09:24.510851 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.511364 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.512160 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.512181 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.512251 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.513848 1760410 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1018 14:09:24.513854 1760410 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1018 14:09:24.515867 1760410 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1018 14:09:24.515885 1760410 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1018 14:09:24.515912 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.516312 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.517033 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.517295 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.517359 1760410 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 14:09:24.519170 1760410 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 14:09:24.519288 1760410 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1018 14:09:24.520436 1760410 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 14:09:24.520516 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1018 14:09:24.520549 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.521274 1760410 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 14:09:24.521295 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1018 14:09:24.521320 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.521822 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38545
	I1018 14:09:24.522725 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.523307 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.523325 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.523932 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.524192 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.527503 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.527590 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.527618 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.527649 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.527682 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35355
	I1018 14:09:24.528451 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.528456 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.528513 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.528706 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.528847 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.529262 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.529279 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.529677 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.529956 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.530621 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39361
	I1018 14:09:24.531189 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.531587 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33089
	I1018 14:09:24.532552 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.532587 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.533165 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.533199 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.534272 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.534329 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.534670 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43865
	I1018 14:09:24.534888 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.534927 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.534934 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.535018 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36367
	I1018 14:09:24.535456 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.536405 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.536423 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.536459 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.536498 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.536522 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.536586 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.536638 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.536641 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.536797 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.536878 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.537335 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.537386 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.537814 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34033
	I1018 14:09:24.537939 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.538069 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.538085 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.538431 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.538510 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.538875 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.539073 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.539143 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33021
	I1018 14:09:24.540287 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1018 14:09:24.540559 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.540650 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.540661 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.540287 1760410 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1018 14:09:24.540789 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44949
	I1018 14:09:24.541394 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.541512 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.541542 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.541580 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.542392 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.542582 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.542593 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.541968 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.541995 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.542027 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.541787 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.542477 1760410 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 14:09:24.542769 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1018 14:09:24.542787 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.543139 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.543258 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1018 14:09:24.543232 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.543329 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.544059 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.544119 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.544691 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.544728 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.545623 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.545670 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.547151 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1018 14:09:24.547560 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.548774 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.548901 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1018 14:09:24.549486 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.549513 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.549520 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38547
	I1018 14:09:24.549555 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.549743 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.549944 1760410 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1018 14:09:24.549986 1760410 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1018 14:09:24.550111 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.550462 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.550548 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.551322 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1018 14:09:24.551448 1760410 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1018 14:09:24.551471 1760410 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1018 14:09:24.551503 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.552417 1760410 out.go:179]   - Using image docker.io/registry:3.0.0
	I1018 14:09:24.552611 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.552668 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.552694 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.553138 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.553466 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1018 14:09:24.553546 1760410 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1018 14:09:24.553557 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1018 14:09:24.553575 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.555796 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1018 14:09:24.556091 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.556537 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.559463 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1018 14:09:24.560143 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39261
	I1018 14:09:24.560689 1760410 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1018 14:09:24.560709 1760410 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1018 14:09:24.560733 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.561360 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.562223 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.562248 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.562334 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37369
	I1018 14:09:24.564735 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.564798 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.564809 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.564889 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.564947 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43255
	I1018 14:09:24.565207 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.565656 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.565686 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.565804 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.565867 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.566012 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.566138 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.566251 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.566837 1760410 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1018 14:09:24.566841 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.566954 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.567074 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.567098 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.567382 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.567544 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.567609 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.567849 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.568018 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.568167 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.568390 1760410 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 14:09:24.568518 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1018 14:09:24.568539 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.568408 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.569303 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.569321 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.569601 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41813
	I1018 14:09:24.569798 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.569904 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44161
	I1018 14:09:24.570247 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.570534 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.570627 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.570989 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.571754 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.571776 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.571809 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.571835 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.571888 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.571942 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.572034 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34561
	I1018 14:09:24.572101 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:24.572114 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:24.572301 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.572420 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.572512 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:24.572532 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:24.572545 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:24.572552 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:24.572560 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:24.573079 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:24.573081 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.573095 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.573102 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:24.573108 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.573114 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	W1018 14:09:24.573205 1760410 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1018 14:09:24.573206 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.573377 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.573909 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.574598 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.574613 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.574986 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.575284 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.575403 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.576055 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41389
	I1018 14:09:24.576282 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.576635 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.576750 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.577145 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.577164 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.577387 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.577425 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.578449 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.578485 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1018 14:09:24.578527 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.578725 1760410 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 14:09:24.578741 1760410 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 14:09:24.578760 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.578783 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.579845 1760410 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1018 14:09:24.579890 1760410 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1018 14:09:24.579901 1760410 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1018 14:09:24.579916 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.579866 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.579966 1760410 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1018 14:09:24.581298 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.581518 1760410 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1018 14:09:24.581555 1760410 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 14:09:24.581566 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1018 14:09:24.581582 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.581701 1760410 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1018 14:09:24.581733 1760410 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1018 14:09:24.581762 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.582432 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.582611 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.582663 1760410 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1018 14:09:24.582679 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1018 14:09:24.582698 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.582744 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.583429 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.583635 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.583761 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.583832 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.584362 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35513
	I1018 14:09:24.584568 1760410 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 14:09:24.585155 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.585916 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.585938 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.586019 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.586361 1760410 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 14:09:24.586383 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 14:09:24.586403 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.586683 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.586913 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.587506 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.587537 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.587565 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.587802 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.587988 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.588388 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.588708 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.588631 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.588734 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.589129 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.589325 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.589522 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.590171 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.590296 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.590321 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.590811 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.591126 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.591174 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.591319 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.591484 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.591523 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.591739 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.591761 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.591773 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.591922 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.592011 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.592200 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.592253 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.592273 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.592387 1760410 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1018 14:09:24.592403 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.592465 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.592624 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.592714 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.592859 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.592993 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.593164 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.593741 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.593774 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.593963 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.594146 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.594295 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.594464 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.595795 1760410 out.go:179]   - Using image docker.io/busybox:stable
	I1018 14:09:24.597040 1760410 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 14:09:24.597063 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1018 14:09:24.597082 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.600612 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.600998 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.601019 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.601363 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.601584 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.601753 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.601908 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	W1018 14:09:24.742102 1760410 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:60786->192.168.39.100:22: read: connection reset by peer
	I1018 14:09:24.742153 1760410 retry.go:31] will retry after 155.166839ms: ssh: handshake failed: read tcp 192.168.39.1:60786->192.168.39.100:22: read: connection reset by peer
	W1018 14:09:24.905499 1760410 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:60832->192.168.39.100:22: read: connection reset by peer
	I1018 14:09:24.905539 1760410 retry.go:31] will retry after 290.251665ms: ssh: handshake failed: read tcp 192.168.39.1:60832->192.168.39.100:22: read: connection reset by peer
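	[annotation] The two sshutil.go:64 / retry.go:31 pairs above show the expected recovery path: a transient "connection reset by peer" during the SSH handshake is logged as a warning and the dial is retried after a short delay. A hedged sketch of that retry-with-backoff pattern is below; the attempt count and doubling schedule are illustrative assumptions, not minikube's actual retry.go parameters.

	    package main

	    import (
	    	"errors"
	    	"fmt"
	    	"time"
	    )

	    // withRetry retries fn up to attempts times, doubling the delay after
	    // each failure, and returns the last error if every attempt fails.
	    func withRetry(attempts int, delay time.Duration, fn func() error) error {
	    	var err error
	    	for i := 0; i < attempts; i++ {
	    		if err = fn(); err == nil {
	    			return nil
	    		}
	    		time.Sleep(delay)
	    		delay *= 2 // back off between attempts
	    	}
	    	return fmt.Errorf("all %d attempts failed: %w", attempts, err)
	    }

	    func main() {
	    	err := withRetry(3, 150*time.Millisecond, func() error {
	    		return errors.New("ssh: handshake failed") // stand-in for the real dial
	    	})
	    	fmt.Println(err)
	    }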
	I1018 14:09:25.195583 1760410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 14:09:25.195661 1760410 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 14:09:25.238678 1760410 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1018 14:09:25.238705 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1018 14:09:25.239580 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 14:09:25.243439 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 14:09:25.244497 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 14:09:25.264037 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 14:09:25.312273 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 14:09:25.315550 1760410 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1018 14:09:25.315578 1760410 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1018 14:09:25.320939 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1018 14:09:25.324940 1760410 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1018 14:09:25.324962 1760410 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1018 14:09:25.327771 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 14:09:25.328434 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 14:09:25.339706 1760410 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1018 14:09:25.339737 1760410 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1018 14:09:25.369886 1760410 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1018 14:09:25.369914 1760410 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1018 14:09:25.370459 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 14:09:25.537261 1760410 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1018 14:09:25.537300 1760410 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1018 14:09:25.585100 1760410 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1018 14:09:25.585145 1760410 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1018 14:09:25.685376 1760410 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1018 14:09:25.685407 1760410 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1018 14:09:25.768517 1760410 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1018 14:09:25.768553 1760410 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1018 14:09:25.768978 1760410 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1018 14:09:25.769004 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1018 14:09:25.814134 1760410 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:25.814164 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1018 14:09:25.853698 1760410 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 14:09:25.853731 1760410 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1018 14:09:26.014188 1760410 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1018 14:09:26.014222 1760410 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1018 14:09:26.060465 1760410 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1018 14:09:26.060498 1760410 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1018 14:09:26.091905 1760410 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1018 14:09:26.091940 1760410 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1018 14:09:26.114081 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1018 14:09:26.248999 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:26.271395 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 14:09:26.432032 1760410 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1018 14:09:26.432068 1760410 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1018 14:09:26.436207 1760410 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1018 14:09:26.436242 1760410 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1018 14:09:26.558205 1760410 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1018 14:09:26.558233 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1018 14:09:26.717226 1760410 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 14:09:26.717268 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1018 14:09:26.717225 1760410 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1018 14:09:26.717386 1760410 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1018 14:09:26.825284 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1018 14:09:27.137937 1760410 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1018 14:09:27.137970 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1018 14:09:27.440610 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 14:09:27.873332 1760410 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1018 14:09:27.873382 1760410 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1018 14:09:28.056527 1760410 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.860893783s)
	I1018 14:09:28.056563 1760410 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
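	[annotation] For readability: the sed pipeline that just completed (start/completion logged at 14:09:25.195583 and 14:09:28.056527) rewrites the coredns ConfigMap so that the following stanza, copied verbatim from the -e expression in the command, is inserted ahead of the existing forward block (a second -e also adds `log` before `errors`):

	    hosts {
	       192.168.39.1 host.minikube.internal
	       fallthrough
	    }

	This is what makes host.minikube.internal resolve to the host's bridge IP from inside the cluster.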
	I1018 14:09:28.056618 1760410 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.860884504s)
	I1018 14:09:28.056693 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.817081387s)
	I1018 14:09:28.056751 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:28.056765 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:28.056766 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.813291284s)
	I1018 14:09:28.056811 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:28.056828 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:28.057259 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:28.057276 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:28.057280 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:28.057300 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:28.057326 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:28.057416 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:28.057439 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:28.057482 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:28.057493 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:28.057712 1760410 node_ready.go:35] waiting up to 6m0s for node "addons-891059" to be "Ready" ...
	I1018 14:09:28.057737 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:28.057777 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:28.057784 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:28.057851 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:28.057951 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:28.057965 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:28.062488 1760410 node_ready.go:49] node "addons-891059" is "Ready"
	I1018 14:09:28.062522 1760410 node_ready.go:38] duration metric: took 4.780102ms for node "addons-891059" to be "Ready" ...
	I1018 14:09:28.062537 1760410 api_server.go:52] waiting for apiserver process to appear ...
	I1018 14:09:28.062602 1760410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
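	[annotation] The pgrep check above (pgrep -x exact match, -n newest, -f full command line) is how the runner confirms the apiserver process exists before proceeding. A minimal polling sketch follows; it is illustrative only and runs pgrep locally, whereas minikube actually executes it on the VM over SSH via ssh_runner.

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"time"
	    )

	    // waitForAPIServer polls pgrep until the kube-apiserver process
	    // appears or the timeout elapses.
	    func waitForAPIServer(timeout time.Duration) error {
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		// pgrep exits 0 when at least one process matches the pattern.
	    		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
	    			return nil
	    		}
	    		time.Sleep(500 * time.Millisecond)
	    	}
	    	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
	    }

	    func main() { fmt.Println(waitForAPIServer(30 * time.Second)) }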
	I1018 14:09:28.633793 1760410 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-891059" context rescaled to 1 replicas
	I1018 14:09:28.657122 1760410 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1018 14:09:28.657153 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1018 14:09:29.297640 1760410 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1018 14:09:29.297673 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1018 14:09:29.722108 1760410 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 14:09:29.722138 1760410 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1018 14:09:30.201846 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 14:09:31.747160 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.502603848s)
	I1018 14:09:31.747234 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:31.747249 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:31.747635 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:31.747662 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:31.747675 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:31.747685 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:31.747976 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:31.748000 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:31.989912 1760410 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1018 14:09:31.989960 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:31.993852 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:31.994463 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:31.994498 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:31.994763 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:31.995004 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:31.995210 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:31.995372 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:32.401099 1760410 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1018 14:09:32.582819 1760410 addons.go:238] Setting addon gcp-auth=true in "addons-891059"
	I1018 14:09:32.582898 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:32.583276 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:32.583338 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:32.598366 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38277
	I1018 14:09:32.598979 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:32.599565 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:32.599588 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:32.599990 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:32.600582 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:32.600654 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:32.615909 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44783
	I1018 14:09:32.616524 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:32.616999 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:32.617024 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:32.617441 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:32.617696 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:32.619651 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:32.619882 1760410 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1018 14:09:32.619905 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:32.623262 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:32.623788 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:32.623815 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:32.624039 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:32.624251 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:32.624440 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:32.624678 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:34.410431 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.146350667s)
	I1018 14:09:34.410505 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.410520 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.410535 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.098229729s)
	I1018 14:09:34.410591 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.410608 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.410627 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.08966013s)
	I1018 14:09:34.410671 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.410688 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.410780 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (9.082972673s)
	I1018 14:09:34.410825 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.410842 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.410885 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.082422149s)
	I1018 14:09:34.410912 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.410921 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.410996 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.040510674s)
	I1018 14:09:34.411019 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.411040 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.411044 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.411064 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.411075 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.411083 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.411111 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.411122 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.29701229s)
	I1018 14:09:34.411143 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.411148 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.411161 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.411170 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.411178 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.411185 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.411186 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.411194 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.411202 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.411209 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.411237 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (8.162212378s)
	W1018 14:09:34.411260 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:34.411279 1760410 retry.go:31] will retry after 156.548971ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
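	The retry.go:31 "will retry after 156.548971ms" line above is the generic retry-with-backoff loop minikube wraps around each failing kubectl apply. A minimal sketch of that pattern; the helper name, attempt count, and backoff constants are illustrative assumptions, not minikube's actual tuning:

```go
// Sketch of a retry-with-backoff wrapper around a fallible operation.
package main

import (
	"fmt"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or attempts are exhausted,
// sleeping for an increasing interval between tries.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // back off between applies
	}
	return fmt.Errorf("after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	_ = retryWithBackoff(5, 150*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("kubectl apply exited with status 1")
		}
		return nil
	})
	fmt.Println("succeeded after", calls, "calls")
}
```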
	I1018 14:09:34.411277 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.411304 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.411320 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.411329 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.411355 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.411385 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.139958439s)
	I1018 14:09:34.411415 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.411426 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.411451 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.586135977s)
	I1018 14:09:34.411563 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.411581 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.411476 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.413776 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.413792 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.413803 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.413813 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.413821 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.413830 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.413837 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.413839 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.413857 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.413878 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.413884 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.413892 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.413899 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.413949 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.413963 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.413976 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.413984 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.413993 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.414003 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.414010 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.414017 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.414067 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.414253 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.414280 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.414288 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.414297 1760410 addons.go:479] Verifying addon metrics-server=true in "addons-891059"
	I1018 14:09:34.414448 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.414488 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.414509 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.414541 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.415992 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.416015 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.416023 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.416037 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.416049 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.416063 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.415991 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.416140 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.416167 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.416177 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.416185 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.416194 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.416025 1760410 addons.go:479] Verifying addon ingress=true in "addons-891059"
	I1018 14:09:34.416625 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.416635 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.413977 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.416602 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.416980 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.416993 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.418102 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.418150 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.418163 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.418177 1760410 addons.go:479] Verifying addon registry=true in "addons-891059"
	I1018 14:09:34.418831 1760410 out.go:179] * Verifying ingress addon...
	I1018 14:09:34.418835 1760410 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-891059 service yakd-dashboard -n yakd-dashboard
	
	I1018 14:09:34.420852 1760410 out.go:179] * Verifying registry addon...
	I1018 14:09:34.422521 1760410 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1018 14:09:34.423238 1760410 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1018 14:09:34.503158 1760410 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 14:09:34.503192 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:34.503257 1760410 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1018 14:09:34.503271 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
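	The kapi.go:75/86/96 lines above poll the API server for pods matching a label selector and report Pending until every match is Running. A minimal sketch of the same wait, assuming client-go and the kubeconfig path from the log; the 500ms poll interval and the simplified state message are illustrative:

```go
// Sketch of a label-selector pod wait with client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running++
			}
		}
		if len(pods.Items) > 0 && running == len(pods.Items) {
			return nil // every pod for the selector is Running
		}
		// Simplified: the real kapi log prints each pod's actual phase.
		fmt.Printf("waiting for pod %q, not all Running yet\n", selector)
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %q", selector)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPods(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
		panic(err)
	}
}
```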
	I1018 14:09:34.568542 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:34.621858 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.621880 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.622193 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.622248 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.622262 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	W1018 14:09:34.622394 1760410 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
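	The 'default-storageclass' error above is an optimistic-concurrency conflict: the StorageClass changed between the read and the update, so the stale resourceVersion is rejected. client-go ships retry.RetryOnConflict for exactly this case; a minimal sketch that re-reads local-path on each attempt (the annotation key is the standard default-class marker, everything else is illustrative):

```go
// Sketch of conflict-safe updates with client-go's RetryOnConflict.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Fetch the freshest copy on every attempt so the update
		// carries the latest resourceVersion.
		sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "local-path", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
		_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
}
```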
	I1018 14:09:34.659969 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.659996 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.660315 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.660316 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.660354 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.941419 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:34.942360 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:34.990391 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.549686758s)
	I1018 14:09:34.990429 1760410 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (6.927791238s)
	I1018 14:09:34.990461 1760410 api_server.go:72] duration metric: took 10.57479054s to wait for apiserver process to appear ...
	W1018 14:09:34.990458 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 14:09:34.990494 1760410 retry.go:31] will retry after 178.461593ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
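	The stderr above ends with "ensure CRDs are installed first": the VolumeSnapshotClass object is applied in the same batch as the CRD that defines it, so its REST mapping does not exist yet and the apply fails until a retry lands after the CRD is registered. One common way to break that race, sketched here via os/exec and kubectl wait --for=condition=established; the three-step helper is an assumption for illustration, not minikube's actual fix:

```go
// Sketch: apply CRDs, wait for them to be Established, then apply CRs.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	cmd := exec.Command(args[0], args[1:]...)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	// 1. Install the snapshot CRDs on their own (paths from the log).
	if err := run("kubectl", "apply",
		"-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
		"-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
		"-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml"); err != nil {
		panic(err)
	}
	// 2. Block until the API server has registered the new kinds.
	if err := run("kubectl", "wait", "--for=condition=established", "--timeout=60s",
		"crd/volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
		panic(err)
	}
	// 3. Only now apply the VolumeSnapshotClass that instantiates the CRD,
	//    avoiding the "no matches for kind" error seen above.
	if err := run("kubectl", "apply",
		"-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"); err != nil {
		panic(err)
	}
}
```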
	I1018 14:09:34.990467 1760410 api_server.go:88] waiting for apiserver healthz status ...
	I1018 14:09:34.990545 1760410 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I1018 14:09:35.010676 1760410 api_server.go:279] https://192.168.39.100:8443/healthz returned 200:
	ok
	I1018 14:09:35.013686 1760410 api_server.go:141] control plane version: v1.34.1
	I1018 14:09:35.013719 1760410 api_server.go:131] duration metric: took 23.188895ms to wait for apiserver health ...
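	The api_server.go lines above are a plain HTTPS probe of the control plane's /healthz endpoint, which returns 200 with body "ok" when healthy. A minimal sketch of the same check; skipping TLS verification here is an illustrative shortcut, a real client would trust the cluster CA instead:

```go
// Sketch of an apiserver healthz probe.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // demo only
	}
	resp, err := client.Get("https://192.168.39.100:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	if resp.StatusCode == http.StatusOK && string(body) == "ok" {
		fmt.Println("apiserver healthy")
	}
}
```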
	I1018 14:09:35.013750 1760410 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 14:09:35.060072 1760410 system_pods.go:59] 16 kube-system pods found
	I1018 14:09:35.060119 1760410 system_pods.go:61] "amd-gpu-device-plugin-c5cbb" [64430541-160f-413b-b21e-6636047a8859] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 14:09:35.060127 1760410 system_pods.go:61] "coredns-66bc5c9577-9t6mk" [d2cf3593-0ffc-49aa-ab5d-1ecf71d259cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 14:09:35.060138 1760410 system_pods.go:61] "coredns-66bc5c9577-nf592" [e1dcbe4f-f240-4a2f-a4ff-686ee74288d6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 14:09:35.060145 1760410 system_pods.go:61] "etcd-addons-891059" [d809b325-765e-4e94-9832-03ad283377f1] Running
	I1018 14:09:35.060149 1760410 system_pods.go:61] "kube-apiserver-addons-891059" [edc4bec3-9171-4df8-a0e4-556ac2ece3e1] Running
	I1018 14:09:35.060152 1760410 system_pods.go:61] "kube-controller-manager-addons-891059" [03f45aa3-88da-45f0-9932-fa0a92d33e62] Running
	I1018 14:09:35.060157 1760410 system_pods.go:61] "kube-ingress-dns-minikube" [2d2be3a2-f8a7-4762-a4a6-aeea42df7e21] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 14:09:35.060160 1760410 system_pods.go:61] "kube-proxy-ckpzl" [a3ac992c-4401-40f5-93dd-7a525ec3b2a5] Running
	I1018 14:09:35.060163 1760410 system_pods.go:61] "kube-scheduler-addons-891059" [54facfd7-1a3c-4565-8ffb-d4ef204a0858] Running
	I1018 14:09:35.060168 1760410 system_pods.go:61] "metrics-server-85b7d694d7-zthlp" [23d1a687-8b62-4e3f-be5e-9664ae7f101e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 14:09:35.060178 1760410 system_pods.go:61] "nvidia-device-plugin-daemonset-5z8tb" [0e21578d-6373-41a1-aaa9-7c86d80f9c8c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 14:09:35.060186 1760410 system_pods.go:61] "registry-6b586f9694-z6m2x" [e32c82d5-bbaf-47cf-a6dd-4488d4e419e4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 14:09:35.060194 1760410 system_pods.go:61] "registry-creds-764b6fb674-sg8jp" [55d9e015-f26a-4270-8187-b8312c331504] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 14:09:35.060203 1760410 system_pods.go:61] "registry-proxy-tmmvd" [cb52b147-d27f-4a99-9ec8-ffd5f90861e4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 14:09:35.060209 1760410 system_pods.go:61] "snapshot-controller-7d9fbc56b8-b9tnq" [a028a732-94f8-46f5-8ade-adc72e44a92d] Pending
	I1018 14:09:35.060218 1760410 system_pods.go:61] "storage-provisioner" [a6f8bdeb-9db0-44f3-b3cb-8396901acaf5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 14:09:35.060229 1760410 system_pods.go:74] duration metric: took 46.469158ms to wait for pod list to return data ...
	I1018 14:09:35.060248 1760410 default_sa.go:34] waiting for default service account to be created ...
	I1018 14:09:35.104632 1760410 default_sa.go:45] found service account: "default"
	I1018 14:09:35.104663 1760410 default_sa.go:55] duration metric: took 44.40546ms for default service account to be created ...
	I1018 14:09:35.104677 1760410 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 14:09:35.169265 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 14:09:35.176957 1760410 system_pods.go:86] 17 kube-system pods found
	I1018 14:09:35.177007 1760410 system_pods.go:89] "amd-gpu-device-plugin-c5cbb" [64430541-160f-413b-b21e-6636047a8859] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 14:09:35.177019 1760410 system_pods.go:89] "coredns-66bc5c9577-9t6mk" [d2cf3593-0ffc-49aa-ab5d-1ecf71d259cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 14:09:35.177052 1760410 system_pods.go:89] "coredns-66bc5c9577-nf592" [e1dcbe4f-f240-4a2f-a4ff-686ee74288d6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 14:09:35.177068 1760410 system_pods.go:89] "etcd-addons-891059" [d809b325-765e-4e94-9832-03ad283377f1] Running
	I1018 14:09:35.177079 1760410 system_pods.go:89] "kube-apiserver-addons-891059" [edc4bec3-9171-4df8-a0e4-556ac2ece3e1] Running
	I1018 14:09:35.177087 1760410 system_pods.go:89] "kube-controller-manager-addons-891059" [03f45aa3-88da-45f0-9932-fa0a92d33e62] Running
	I1018 14:09:35.177100 1760410 system_pods.go:89] "kube-ingress-dns-minikube" [2d2be3a2-f8a7-4762-a4a6-aeea42df7e21] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 14:09:35.177106 1760410 system_pods.go:89] "kube-proxy-ckpzl" [a3ac992c-4401-40f5-93dd-7a525ec3b2a5] Running
	I1018 14:09:35.177117 1760410 system_pods.go:89] "kube-scheduler-addons-891059" [54facfd7-1a3c-4565-8ffb-d4ef204a0858] Running
	I1018 14:09:35.177125 1760410 system_pods.go:89] "metrics-server-85b7d694d7-zthlp" [23d1a687-8b62-4e3f-be5e-9664ae7f101e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 14:09:35.177134 1760410 system_pods.go:89] "nvidia-device-plugin-daemonset-5z8tb" [0e21578d-6373-41a1-aaa9-7c86d80f9c8c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 14:09:35.177145 1760410 system_pods.go:89] "registry-6b586f9694-z6m2x" [e32c82d5-bbaf-47cf-a6dd-4488d4e419e4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 14:09:35.177156 1760410 system_pods.go:89] "registry-creds-764b6fb674-sg8jp" [55d9e015-f26a-4270-8187-b8312c331504] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 14:09:35.177171 1760410 system_pods.go:89] "registry-proxy-tmmvd" [cb52b147-d27f-4a99-9ec8-ffd5f90861e4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 14:09:35.177180 1760410 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b9tnq" [a028a732-94f8-46f5-8ade-adc72e44a92d] Pending
	I1018 14:09:35.177187 1760410 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bzhfk" [f3e3fb2c-05b7-448d-bca6-3438d70868b1] Pending
	I1018 14:09:35.177198 1760410 system_pods.go:89] "storage-provisioner" [a6f8bdeb-9db0-44f3-b3cb-8396901acaf5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 14:09:35.177213 1760410 system_pods.go:126] duration metric: took 72.526149ms to wait for k8s-apps to be running ...
	I1018 14:09:35.177228 1760410 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 14:09:35.177303 1760410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 14:09:35.445832 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:35.461317 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:35.939729 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:35.942319 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:36.445234 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.243330128s)
	I1018 14:09:36.445310 1760410 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.825399752s)
	I1018 14:09:36.445314 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:36.445449 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:36.445853 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:36.445924 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:36.445941 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:36.445953 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:36.445962 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:36.446272 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:36.446292 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:36.446304 1760410 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-891059"
	I1018 14:09:36.447257 1760410 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1018 14:09:36.448070 1760410 out.go:179] * Verifying csi-hostpath-driver addon...
	I1018 14:09:36.449546 1760410 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 14:09:36.450329 1760410 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1018 14:09:36.450870 1760410 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1018 14:09:36.450894 1760410 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1018 14:09:36.458277 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:36.471857 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:36.484451 1760410 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 14:09:36.484481 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:36.597464 1760410 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1018 14:09:36.597499 1760410 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1018 14:09:36.732996 1760410 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 14:09:36.733028 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1018 14:09:36.885741 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 14:09:36.948270 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:36.948391 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:36.960478 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:37.436446 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:37.439412 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:37.456938 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:37.927403 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:37.928102 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:37.956527 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:38.404132 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.835532164s)
	W1018 14:09:38.404196 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:38.404224 1760410 retry.go:31] will retry after 203.009637ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 14:09:38.433864 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:38.434743 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:38.531382 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:38.607892 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:38.751077 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.58176118s)
	I1018 14:09:38.751130 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:38.751161 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:38.751178 1760410 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.573842033s)
	I1018 14:09:38.751219 1760410 system_svc.go:56] duration metric: took 3.573986856s WaitForService to wait for kubelet
	I1018 14:09:38.751238 1760410 kubeadm.go:586] duration metric: took 14.335564787s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 14:09:38.751274 1760410 node_conditions.go:102] verifying NodePressure condition ...
	I1018 14:09:38.751483 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:38.751506 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:38.751516 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:38.751529 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:38.751536 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:38.751791 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:38.751808 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:38.851019 1760410 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1018 14:09:38.851051 1760410 node_conditions.go:123] node cpu capacity is 2
	I1018 14:09:38.851069 1760410 node_conditions.go:105] duration metric: took 99.788234ms to run NodePressure ...
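	The node_conditions.go figures above (17734596Ki ephemeral storage, 2 CPUs) are read straight from each Node's reported capacity. A minimal sketch, again assuming client-go and the log's kubeconfig path:

```go
// Sketch of reading node capacity quantities from Node status.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node %s: cpu capacity %s, ephemeral storage %s\n",
			n.Name, cpu.String(), storage.String())
	}
}
```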
	I1018 14:09:38.851086 1760410 start.go:241] waiting for startup goroutines ...
	I1018 14:09:38.908065 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.022268979s)
	I1018 14:09:38.908143 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:38.908165 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:38.908474 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:38.908500 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:38.908510 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:38.908518 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:38.908801 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:38.908819 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:38.908845 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:38.909928 1760410 addons.go:479] Verifying addon gcp-auth=true in "addons-891059"
	I1018 14:09:38.911794 1760410 out.go:179] * Verifying gcp-auth addon...
	I1018 14:09:38.913871 1760410 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1018 14:09:38.969859 1760410 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1018 14:09:38.969881 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:38.979126 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:38.979302 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:38.999385 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:39.427914 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:39.428338 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:39.431173 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:39.465614 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:39.930950 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:39.936675 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:39.942841 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:39.965308 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:40.421639 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:40.429893 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:40.429965 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:40.457177 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:40.676324 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.068378617s)
	W1018 14:09:40.676402 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:40.676434 1760410 retry.go:31] will retry after 741.361151ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 14:09:40.925104 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:40.933643 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:41.024046 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:41.027134 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:41.418785 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:41.422791 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:41.437450 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:41.437815 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:41.458160 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:41.920933 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:41.931994 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:41.932787 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:41.954074 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:42.420874 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:42.427884 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:42.432996 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:42.455566 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:42.935811 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:42.935897 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:42.936364 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:42.948192 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.529349883s)
	W1018 14:09:42.948266 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:42.948305 1760410 retry.go:31] will retry after 603.252738ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 14:09:42.961547 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:43.421694 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:43.425963 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:43.432125 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:43.454728 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:43.552443 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:43.920168 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:43.926196 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:43.932562 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:43.954780 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:44.418856 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:44.434761 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:44.434815 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:44.485100 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:44.719803 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.167302475s)
	W1018 14:09:44.719876 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:44.719906 1760410 retry.go:31] will retry after 756.582939ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 14:09:44.919572 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:44.929974 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:44.930622 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:44.954972 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:45.419454 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:45.431537 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:45.435706 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:45.458249 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:45.477327 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:45.921959 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:45.932928 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:45.933443 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:45.960253 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:46.424197 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:46.434428 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:46.437611 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:46.457951 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:46.721183 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.243789601s)
	W1018 14:09:46.721253 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:46.721284 1760410 retry.go:31] will retry after 1.22541109s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:46.920063 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:46.927281 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:46.930483 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:46.954658 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:47.422281 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:47.427164 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:47.431758 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:47.456565 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:47.926249 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:47.939833 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:47.940075 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:47.946922 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:47.966036 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:48.420073 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:48.432202 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:48.434126 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:48.457282 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:48.920393 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:48.930362 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:48.932858 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:48.957018 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:49.201980 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.255004165s)
	W1018 14:09:49.202036 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:49.202059 1760410 retry.go:31] will retry after 2.58897953s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:49.420911 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:49.428333 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:49.430869 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:49.457131 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:50.368228 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:50.376847 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:50.376847 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:50.377051 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:50.476106 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:50.476372 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:50.479024 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:50.479966 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:50.920534 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:50.935331 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:50.938361 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:50.961186 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:51.424118 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:51.430809 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:51.432102 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:51.455044 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:51.791362 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:51.922858 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:51.934999 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:51.935987 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:51.958913 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:52.642039 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:52.642370 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:52.644501 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:52.644727 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:52.918752 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:52.926588 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:52.930871 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:52.956219 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:53.183831 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.392411457s)
	W1018 14:09:53.183895 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:53.183924 1760410 retry.go:31] will retry after 4.131889795s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:53.417891 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:53.426911 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:53.428495 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:53.454047 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:53.919491 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:53.929299 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:53.929427 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:53.958043 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:54.418456 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:54.427470 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:54.427657 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:54.456313 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:54.919925 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:54.927822 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:54.928397 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:54.955119 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:55.419222 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:55.429271 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:55.430752 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:55.455541 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:55.918460 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:55.928654 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:55.930176 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:55.958687 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:56.417289 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:56.426666 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:56.426937 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:56.456516 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:56.921455 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:56.931545 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:56.932200 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:56.957601 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:57.316649 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:57.422032 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:57.435023 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:57.437778 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:57.455440 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:57.921161 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:57.929313 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:57.929394 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:57.955970 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:58.423288 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:58.439731 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:58.440095 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:58.786495 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:58.919590 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:58.930253 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:58.932272 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:58.957912 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:58.980642 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.663942768s)
	W1018 14:09:58.980696 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:58.980722 1760410 retry.go:31] will retry after 6.037644719s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:59.421401 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:59.428863 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:59.429465 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:59.458445 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:59.918316 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:59.928753 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:59.928856 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:59.955245 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:00.418136 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:00.427048 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:00.428214 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:00.457368 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:00.919392 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:00.929649 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:00.931313 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:00.959561 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:01.420084 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:01.426435 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:01.428419 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:01.463886 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:01.918664 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:01.927921 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:01.927979 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:01.954513 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:02.417929 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:02.426037 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:02.428261 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:02.455407 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:02.922146 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:02.928949 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:02.933375 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:02.956535 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:03.420697 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:03.429208 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:03.432897 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:03.459039 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:03.918554 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:03.926959 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:03.927105 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:03.955657 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:04.418489 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:04.430359 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:04.430521 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:04.456644 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:04.918502 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:04.930599 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:04.930923 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:04.956737 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:05.018763 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:10:05.417681 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:05.428004 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:05.429827 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:05.456781 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:05.917569 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:05.926923 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:05.928124 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:05.957076 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:06.036566 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.017738492s)
	W1018 14:10:06.036634 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:10:06.036662 1760410 retry.go:31] will retry after 12.004802236s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:10:06.419404 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:06.429963 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:06.430297 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:06.457600 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:06.919260 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:06.929676 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:06.929775 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:07.155631 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:07.418580 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:07.427122 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:07.428776 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:07.457310 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:07.922270 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:07.926818 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:07.929313 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:07.956530 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:08.418802 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:08.429772 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:08.430398 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:08.456743 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:08.919063 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:08.930278 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:08.931169 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:08.954708 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:09.424687 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:09.432292 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:09.435514 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:09.460217 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:09.923294 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:09.930199 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:09.931023 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:09.955035 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:10.419846 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:10.426749 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:10.429140 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:10.456969 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:10.953436 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:10.956917 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:10.957054 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:10.957495 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:11.418736 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:11.426419 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:11.430935 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:11.455617 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:11.918928 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:11.927115 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:11.931414 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:11.960289 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:12.418970 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:12.430735 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:12.433659 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:12.456647 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:12.921054 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:12.928629 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:12.928668 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:12.956226 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:13.420386 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:13.427464 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:13.429090 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:13.455488 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:13.918328 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:13.927700 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:13.928318 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:13.954810 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:14.419754 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:14.425924 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:14.427917 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:14.455974 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:14.925112 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:14.929625 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:14.933370 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:14.957078 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:15.418580 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:15.428235 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:15.429169 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:15.457022 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:15.919800 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:15.936816 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:15.937017 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:15.957268 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:16.417946 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:16.427385 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:16.431794 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:16.456614 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:16.919525 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:16.926577 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:16.926658 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:16.954174 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:17.421789 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:17.426437 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:17.431339 1760410 kapi.go:107] duration metric: took 43.008095172s to wait for kubernetes.io/minikube-addons=registry ...
	I1018 14:10:17.457873 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:17.918594 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:17.929987 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:17.961960 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
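
Editor's note: the kapi.go:96 lines poll each label selector roughly twice a second, and kapi.go:107 records a duration metric once the pod goes Running — here 43.008095172s for the registry selector. A minimal sketch of that wait loop follows, assuming a fixed poll interval; podState and the ready channel are stand-ins for the real apiserver lookup, not minikube's API.

    package main

    import (
    	"fmt"
    	"time"
    )

    // podState is a stub for the real cluster lookup; the actual code
    // queries the apiserver for pods matching the label selector.
    func podState(ready chan struct{}) string {
    	select {
    	case <-ready:
    		return "Running"
    	default:
    		return "Pending"
    	}
    }

    // waitForPod polls until the stubbed pod reports Running, logging each
    // tick and the total duration, mirroring the kapi.go:96/107 lines above.
    func waitForPod(selector string, interval, timeout time.Duration, ready chan struct{}) error {
    	start := time.Now()
    	for time.Since(start) < timeout {
    		if podState(ready) == "Running" {
    			fmt.Printf("duration metric: took %v to wait for %s ...\n", time.Since(start), selector)
    			return nil
    		}
    		fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
    		time.Sleep(interval)
    	}
    	return fmt.Errorf("timed out waiting for %s", selector)
    }

    func main() {
    	ready := make(chan struct{})
    	go func() { time.Sleep(2 * time.Second); close(ready) }()
    	_ = waitForPod("kubernetes.io/minikube-addons=registry", 500*time.Millisecond, 10*time.Second, ready)
    }
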
	I1018 14:10:18.042188 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:10:18.422928 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:18.427500 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:18.456271 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:18.919452 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:18.930289 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:18.956388 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:19.361633 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.319335622s)
	W1018 14:10:19.361689 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:10:19.361728 1760410 retry.go:31] will retry after 15.164014777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
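
Editor's note: the retry delays logged across these attempts (1.22s, 2.59s, 4.13s, 6.04s, 12.00s, 15.16s) roughly double each time, the signature of jittered exponential backoff. A minimal sketch of that pattern follows; the base delay and jitter factor are illustrative assumptions, not the actual constants in minikube's retry.go.

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff retries op with a doubling delay plus random jitter,
    // logging each failure the way the "will retry after ..." lines do.
    func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
    	delay := base
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = op(); err == nil {
    			return nil
    		}
    		// Scale the nominal delay by a random factor in [1.0, 1.5).
    		sleep := time.Duration(float64(delay) * (1.0 + 0.5*rand.Float64()))
    		fmt.Printf("will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    		delay *= 2
    	}
    	return err
    }

    func main() {
    	calls := 0
    	err := retryWithBackoff(5, time.Second, func() error {
    		calls++
    		return fmt.Errorf("apply failed (attempt %d)", calls)
    	})
    	fmt.Println("final:", err)
    }

Because the underlying failure is a deterministic validation error, backoff cannot help here; every attempt hits the same malformed document until the wait-loop budget is spent.
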
	I1018 14:10:19.422771 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:19.438239 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:19.456621 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:19.921757 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:19.928298 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:19.956842 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:20.420260 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:20.427508 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:20.458936 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:20.918928 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:20.927378 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:20.955188 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:21.420104 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:21.426947 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:21.524486 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:21.918327 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:21.927194 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:21.955524 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:22.423531 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:22.426633 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:22.454711 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:22.921113 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:22.928945 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:22.954404 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:23.420637 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:23.430677 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:23.459231 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:23.919372 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:23.928323 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:23.958731 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:24.420036 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:24.427298 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:24.456668 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:24.919003 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:24.927657 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:24.957888 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:25.421338 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:25.427501 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:25.455612 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:25.918199 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:25.927869 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:25.958203 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:26.419024 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:26.428832 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:26.456514 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:26.918247 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:26.928171 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:26.956494 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:27.418446 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:27.430922 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:27.460225 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:27.934863 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:27.935267 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:27.956304 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:28.418276 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:28.426282 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:28.455657 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:28.921058 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:28.928216 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:28.957699 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:29.423964 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:29.429784 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:29.459912 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:29.919968 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:29.926486 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:30.021594 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:30.431798 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:30.435432 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:30.456454 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:30.930069 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:30.943105 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:30.955957 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:31.429432 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:31.438231 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:31.455431 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:31.921095 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:31.931309 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:31.956251 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:32.420152 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:32.428240 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:32.458714 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:32.922542 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:32.930043 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:32.957260 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:33.419500 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:33.428933 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:33.455363 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:33.923146 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:33.929585 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:33.958835 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:34.420137 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:34.426760 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:34.457114 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:34.526904 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:10:34.919159 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:34.928439 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:34.955153 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:35.418928 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:35.426233 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:35.458485 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:35.764870 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.237905947s)
	W1018 14:10:35.764934 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:10:35.764957 1760410 retry.go:31] will retry after 14.798475806s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:10:35.919540 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:35.928534 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:35.955008 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:36.450125 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:36.453729 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:36.536855 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:36.917765 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:36.925569 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:36.955287 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:37.419773 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:37.427166 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:37.456318 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:37.919552 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:37.927629 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:38.025256 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:38.424973 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:38.428550 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:38.453898 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:38.919099 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:38.926293 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:38.955682 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:39.418953 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:39.430007 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:39.459225 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:39.920652 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:39.929231 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:39.954710 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:40.421937 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:40.429412 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:40.480118 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:40.920635 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:40.929091 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:40.956998 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:41.426085 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:41.427988 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:41.459105 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:41.918797 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:41.926487 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:41.955036 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:42.420125 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:42.428890 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:42.454689 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:42.919029 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:42.927753 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:42.954419 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:43.422025 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:43.426830 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:43.457376 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:43.917234 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:43.930520 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:43.956616 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:44.419241 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:44.428799 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:44.456787 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:44.918484 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:44.928332 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:44.961125 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:45.421688 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:45.427032 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:45.457168 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:45.919022 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:45.927029 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:45.959091 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:46.418637 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:46.429220 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:46.455413 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:46.919149 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:46.926519 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:46.956560 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:47.419157 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:47.427737 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:47.455569 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:47.918673 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:47.926052 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:47.956842 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:48.420322 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:48.430745 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:48.456105 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:48.922457 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:48.928328 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:48.956428 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:49.434222 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:49.437527 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:49.461279 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:49.920966 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:49.929362 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:49.956797 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:50.418327 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:50.430238 1760410 kapi.go:107] duration metric: took 1m16.007712358s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1018 14:10:50.456335 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:50.564457 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:10:50.917217 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:50.958103 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:51.421689 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:51.455392 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:51.920286 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:51.942284 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.377769111s)
	W1018 14:10:51.942338 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:10:51.942424 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:10:51.942439 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:10:51.942850 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:10:51.942873 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:10:51.942875 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:10:51.942891 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:10:51.942902 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:10:51.943167 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:10:51.943186 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	W1018 14:10:51.943290 1760410 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
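	Both apply attempts above fail for the same reason: kubectl's validator rejects /etc/kubernetes/addons/ig-crd.yaml because its top-level document lacks the mandatory apiVersion and kind fields, so only the objects from ig-deployment.yaml go through (hence the "unchanged"/"configured" stdout). A minimal sketch for confirming this from the host, assuming the profile name and file path shown in the log (the head invocation is illustrative, not part of the test run):
	
	  # inspect the first lines of the rejected manifest inside the VM
	  minikube -p addons-891059 ssh -- sudo head -n 5 /etc/kubernetes/addons/ig-crd.yaml
	  # a well-formed CRD manifest is expected to open with, e.g.:
	  #   apiVersion: apiextensions.k8s.io/v1
	  #   kind: CustomResourceDefinition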
	I1018 14:10:51.956095 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:52.418797 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:52.455097 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:52.918142 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:52.955842 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:53.417788 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:53.454466 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:53.928372 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:53.956892 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:54.421372 1760410 kapi.go:107] duration metric: took 1m15.507497357s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1018 14:10:54.422977 1760410 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-891059 cluster.
	I1018 14:10:54.424170 1760410 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1018 14:10:54.425362 1760410 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
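	The refresh hinted at above maps to a single command; a sketch assuming the profile name from this run:
	
	  # re-enable gcp-auth so already-running pods get the mounted credentials
	  minikube -p addons-891059 addons enable gcp-auth --refresh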
	I1018 14:10:54.455256 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:54.954565 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:55.455801 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:55.954326 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:56.455155 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:56.954954 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:57.455480 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:57.957998 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:58.454831 1760410 kapi.go:107] duration metric: took 1m22.004497442s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1018 14:10:58.456573 1760410 out.go:179] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, storage-provisioner, cloud-spanner, metrics-server, ingress-dns, registry-creds, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1018 14:10:58.457854 1760410 addons.go:514] duration metric: took 1m34.042106278s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin storage-provisioner cloud-spanner metrics-server ingress-dns registry-creds yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1018 14:10:58.457949 1760410 start.go:246] waiting for cluster config update ...
	I1018 14:10:58.457975 1760410 start.go:255] writing updated cluster config ...
	I1018 14:10:58.458280 1760410 ssh_runner.go:195] Run: rm -f paused
	I1018 14:10:58.466229 1760410 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 14:10:58.470432 1760410 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9t6mk" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:58.477134 1760410 pod_ready.go:94] pod "coredns-66bc5c9577-9t6mk" is "Ready"
	I1018 14:10:58.477163 1760410 pod_ready.go:86] duration metric: took 6.703976ms for pod "coredns-66bc5c9577-9t6mk" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:58.479169 1760410 pod_ready.go:83] waiting for pod "etcd-addons-891059" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:58.489364 1760410 pod_ready.go:94] pod "etcd-addons-891059" is "Ready"
	I1018 14:10:58.489404 1760410 pod_ready.go:86] duration metric: took 10.207192ms for pod "etcd-addons-891059" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:58.491622 1760410 pod_ready.go:83] waiting for pod "kube-apiserver-addons-891059" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:58.497381 1760410 pod_ready.go:94] pod "kube-apiserver-addons-891059" is "Ready"
	I1018 14:10:58.497406 1760410 pod_ready.go:86] duration metric: took 5.754148ms for pod "kube-apiserver-addons-891059" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:58.499963 1760410 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-891059" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:58.870880 1760410 pod_ready.go:94] pod "kube-controller-manager-addons-891059" is "Ready"
	I1018 14:10:58.870932 1760410 pod_ready.go:86] duration metric: took 370.945889ms for pod "kube-controller-manager-addons-891059" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:59.070811 1760410 pod_ready.go:83] waiting for pod "kube-proxy-ckpzl" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:59.471322 1760410 pod_ready.go:94] pod "kube-proxy-ckpzl" is "Ready"
	I1018 14:10:59.471383 1760410 pod_ready.go:86] duration metric: took 400.536721ms for pod "kube-proxy-ckpzl" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:59.672128 1760410 pod_ready.go:83] waiting for pod "kube-scheduler-addons-891059" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:11:00.071253 1760410 pod_ready.go:94] pod "kube-scheduler-addons-891059" is "Ready"
	I1018 14:11:00.071288 1760410 pod_ready.go:86] duration metric: took 399.125586ms for pod "kube-scheduler-addons-891059" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:11:00.071306 1760410 pod_ready.go:40] duration metric: took 1.60503304s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 14:11:00.118648 1760410 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 14:11:00.120494 1760410 out.go:179] * Done! kubectl is now configured to use "addons-891059" cluster and "default" namespace by default
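	The extra-wait loop above checks one label selector per control-plane component; the same readiness can be verified by hand after startup. A sketch assuming the context name from this run, with label selectors taken from the log:
	
	  # list the pods behind one of the checked selectors
	  kubectl --context addons-891059 get pods -n kube-system -l k8s-app=kube-dns
	  # block until a component reports Ready, mirroring the 4m extra wait
	  kubectl --context addons-891059 wait --for=condition=Ready pod \
	    -l component=kube-apiserver -n kube-system --timeout=4m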
	
	
	==> CRI-O <==
	Oct 18 14:12:29 addons-891059 crio[822]: time="2025-10-18 14:12:29.978889049Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:97e1670c81585e6415c369e52af3deebb586e548711c359ac4fe22d13bfbf881,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1760796568221224098,StartedAt:1760796568481352352,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.34.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ckpzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3ac992c-4401-40f5-93dd-7a525ec3b2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/a3ac992c-4401-40f5-93dd-7a525ec3b2a5/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/a3ac992c-4401-40f5-93dd-7a525ec3b2a5/containers/kube-proxy/c136a01e,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var/lib/
kubelet/pods/a3ac992c-4401-40f5-93dd-7a525ec3b2a5/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/a3ac992c-4401-40f5-93dd-7a525ec3b2a5/volumes/kubernetes.io~projected/kube-api-access-4fkhb,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-ckpzl_a3ac992c-4401-40f5-93dd-7a525ec3b2a5/kube-proxy/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-colle
ctor/interceptors.go:74" id=22c99e9b-90e1-4014-ad7a-e839254bdd7f name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 18 14:12:29 addons-891059 crio[822]: time="2025-10-18 14:12:29.979291332Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:4f010fdc156cb398c84f19945fc8b9f186ef23cb554bce047cf0bdadc63ef552,Verbose:false,}" file="otel-collector/interceptors.go:62" id=7f5b73e9-c5f0-48a2-b2ff-2e4d76fd665a name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 18 14:12:29 addons-891059 crio[822]: time="2025-10-18 14:12:29.979405721Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:4f010fdc156cb398c84f19945fc8b9f186ef23cb554bce047cf0bdadc63ef552,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1760796553717994552,StartedAt:1760796553893018050,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.6.4-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4360d09804819a4ab0d1ffed7423947,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/f4360d09804819a4ab0d1ffed7423947/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/f4360d09804819a4ab0d1ffed7423947/containers/etcd/d4760c53,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPA
GATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etcd-addons-891059_f4360d09804819a4ab0d1ffed7423947/etcd/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=7f5b73e9-c5f0-48a2-b2ff-2e4d76fd665a name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 18 14:12:29 addons-891059 crio[822]: time="2025-10-18 14:12:29.980462666Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:873a633e0ebfdc97218e103cd398dde377449c146a2b3d8affa3222d72e07fad,Verbose:false,}" file="otel-collector/interceptors.go:62" id=2a04685f-da76-409d-beae-9754e9f49c09 name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 18 14:12:29 addons-891059 crio[822]: time="2025-10-18 14:12:29.981062211Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:873a633e0ebfdc97218e103cd398dde377449c146a2b3d8affa3222d72e07fad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1760796553711062464,StartedAt:1760796553823526145,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.34.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1348b107c675acfd26c3d687c91d60c5,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":1025
7,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/1348b107c675acfd26c3d687c91d60c5/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/1348b107c675acfd26c3d687c91d60c5/containers/kube-controller-manager/99ac7977,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,
HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-addons-891059_1348b107c675acfd26c3d687c91d60c5/kube-controller-manager/0.log,Resources:&ContainerResources{Linux:&LinuxContain
erResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=2a04685f-da76-409d-beae-9754e9f49c09 name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 18 14:12:29 addons-891059 crio[822]: time="2025-10-18 14:12:29.981708354Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:50cc3d2477595030b199dee8a2c8a4cb8f2f508dbbe7bdf89f535de0d3d1d6b6,Verbose:false,}" file="otel-collector/interceptors.go:62" id=20933a3a-8796-4709-bc16-63e2c4e19354 name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 18 14:12:29 addons-891059 crio[822]: time="2025-10-18 14:12:29.981802731Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:50cc3d2477595030b199dee8a2c8a4cb8f2f508dbbe7bdf89f535de0d3d1d6b6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1760796553668816481,StartedAt:1760796553801663575,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.34.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5086595138b36f6eb8ac54e83c6bc182,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol
\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/5086595138b36f6eb8ac54e83c6bc182/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/5086595138b36f6eb8ac54e83c6bc182/containers/kube-scheduler/71ca387d,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-addons-891059_508659513
8b36f6eb8ac54e83c6bc182/kube-scheduler/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=20933a3a-8796-4709-bc16-63e2c4e19354 name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 18 14:12:29 addons-891059 crio[822]: time="2025-10-18 14:12:29.982737893Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:550e8ca214589028236bc3f3e98efbed492d3f84addbacedfb6929bee8541bab,Verbose:false,}" file="otel-collector/interceptors.go:62" id=d9ef8215-4282-4612-9f47-a02976c6e7e8 name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 18 14:12:29 addons-891059 crio[822]: time="2025-10-18 14:12:29.983358576Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:550e8ca214589028236bc3f3e98efbed492d3f84addbacedfb6929bee8541bab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1760796553606636244,StartedAt:1760796553726166639,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.34.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97082571db3e60e44c3d60e99a384436,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\"
:\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/97082571db3e60e44c3d60e99a384436/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/97082571db3e60e44c3d60e99a384436/containers/kube-apiserver/684e1311,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRel
abel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-addons-891059_97082571db3e60e44c3d60e99a384436/kube-apiserver/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=d9ef8215-4282-4612-9f47-a02976c6e7e8 name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 18 14:12:29 addons-891059 crio[822]: time="2025-10-18 14:12:29.995247078Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=63f98fe4-5d4b-49ad-91ee-031bbac9e133 name=/runtime.v1.RuntimeService/Version
	Oct 18 14:12:29 addons-891059 crio[822]: time="2025-10-18 14:12:29.995330103Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=63f98fe4-5d4b-49ad-91ee-031bbac9e133 name=/runtime.v1.RuntimeService/Version
	Oct 18 14:12:29 addons-891059 crio[822]: time="2025-10-18 14:12:29.997126901Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0189395f-7966-4773-81ed-7576617168d5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:12:29 addons-891059 crio[822]: time="2025-10-18 14:12:29.999289909Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760796749999254294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:497776,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0189395f-7966-4773-81ed-7576617168d5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:12:30 addons-891059 crio[822]: time="2025-10-18 14:12:30.000400534Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b79646a1-0b04-4962-8ab8-3d8e6a96cbd7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:12:30 addons-891059 crio[822]: time="2025-10-18 14:12:30.000764003Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b79646a1-0b04-4962-8ab8-3d8e6a96cbd7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:12:30 addons-891059 crio[822]: time="2025-10-18 14:12:30.002330148Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4019b2f5a82ebc7fb6dabae9a874d699665a5d8c69de73eb709ca4a501ac015,PodSandboxId:871fa03a650614957b7d3d2014f39478cf8cb5cd45eb550c6abd6222b43732a9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760796662606988160,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 75ccff45-9202-4152-b90e-8a5a6d306c7d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5e462bcd2b5f465fe95346688533db6801a9c93215937bfbcf4abffe97f6c0,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1760796657878096678,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e429add87fb7915cacc16256e7047f4f649d645dd6350add56e90ceda89be5cb,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1760796656108432125,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c154e6ad0036f8e08a29b6d27bd296913987ed0f4235dd603093178c177e86b,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1760796654288737227,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e42c0ad16a76575cdf86955e752de6fc61fbdffec61b610745b88dc300290e,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1760796650670836429,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:
map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ce2976bee33494c9148720fc6f41dafc7c06699c436b9f7352992e408fc1ce,PodSandboxId:2f9eb1464924400027510bd40640a85e472321a499aaff7e545d8f90a3a2b454,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760796649028158931,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-con
troller-675c5ddd98-bphwz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5355fea1-7cc1-4587-853e-61aaaa6f569e,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9830a2003573c4745aeef463de8c6f60ef95ad1ea86413fbba89a04f8d287e29,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1760796641350506570,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b41579872800aaa54c544cb3ac01bd4bfbdb75ed8bfb2068b63a461effcb494,PodSandboxId:d23e703cbfeb7f985a5ee31bbb8e9a0beaaca929b2a9d12c66bc036a83f06e54,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1760796639902169014,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efc965f-2bb9-4589-8896-270849ff244b,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b6304f138a157686248517d9a4334e9f7e0a04eb4d75d3e8242c7d66099747,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name
:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1760796637960180053,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3781d3641f70c2afdd9e7cf33046996dcefa7ceeb31eaeb6735fe958ea81fbdf,PodSa
ndboxId:2d23bcaba041603a7033e5364863b52ee33056bf513c91b93cbd051dc4ee50fb,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1760796636160087491,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-bzhfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3e3fb2c-05b7-448d-bca6-3438d70868b1,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:9bb6d569a2a3f2ef99bf632b0e17f74e8f99944756e5338f36177afc9784250e,PodSandboxId:7a44187aa2259b4391883c3f4e9b9dfefc7c60831b7bfc9273715b7a8b6675b5,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1760796636024422683,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-b9tnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a028a732-94f8-46f5-8ade-adc72e44a92d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6267021fe47465dfb0a972ca3ac1853819fcb8ec9c4af79da3515676f56c70d,PodSandboxId:7483a2b2bce44deaa3b7126ad65266f9ccb9eb59517cc399fde2646bdce00e31,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760796634343510547,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lz2l5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: edbb1e3e-09f2-4958-b943-de86e541c2ab,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e786527308546addc508c7f9fde815f3dbf888dbbd28417a6fda88b88fa8ab,PodSandboxId:19bb29e5d6915f98e1c622bd12dfd02a46541ba9d2922196d95c45d1eef03591,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1760796634154278160,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66fa96af-5215-410d-899c-8ee3de6c2691,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:405281ec9edfa02e6ef1722dec6adc497496544ed9e116c4827e07faa66e42b3,PodSandboxId:784fb9851d0e370b86d85cb15f009b0ada6ea2b7f21e505158415537390f7d3a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760796631912253285,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nbrm2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e48f1e46-67fb-4c71-bc01-b2f3743345f0,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6be9db168b34b65859e2a83fbc18c461a5bb49d6ad7bba303b588f6380b543,PodSandboxId:0d40ad681440576ca60a0ebc571e472f20c3491afca985ce04d2353688f30b9d,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1760796628053161734,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ff678cb9-xt7jp,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 9ff96a54-feef-40f7-883d-557d20da0d77,},Annotations:map[string]string{io.kubernetes.container.hash: e656c288,io.kubernetes.container.ports: [{\"name\":\"htt
p\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c389fedf82c73101b96bb9331713ba0cf1fe89e497bb463f4a1a5c8f965331eb,PodSandboxId:f6cf7a6905b38496b0fb0dffcad88c191af9be4e2d42b30916a7239099dd25d8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1760796623404092240,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-kj8pr,io.kubernetes.pod.namespace: local-path-storage,io.kubernete
s.pod.uid: b9e6b11c-bbb9-4e19-9cb4-ca24b2aa3018,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:751b2df6a5bf4c3261a679f6e961086b9a7e8a0d308b47ba5a823ed41d50ff7c,PodSandboxId:e7adc46dd97a6e6351f075aad05529d7968ddcfdb815b441bff765545717c999,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760796621649083356,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-bz8k2,io.kubernetes.pod.namespace: gadget,io.kubernetes
.pod.uid: 32f0a88f-aea2-4621-a5b1-df5a3fb86a2b,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d7cc263f993b071d2f5739d619a7384f2d0c7bffc66c17ef715c37d409878c6,PodSandboxId:0c969633ab3503729449ea3baa764c1275a9f42d3acd7406059bded4be881af0,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1760796616663639232,Labels:map[string]string{io.kubernetes.contain
er.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-tmmvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb52b147-d27f-4a99-9ec8-ffd5f90861e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:541478d62a8b3413ce7d1f0e6cf5eeda124ed1193aae54e2f3686911eb6e9fef,PodSandboxId:cf8744f7132e8edf93dc682e9bbccd5e1405ebdb6ed55d3db698ba8d8313cefe,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e4e5706768198b632e90feae7e51918ffac889893
6ee9c3bbcf036f84c8f5ba1,State:CONTAINER_RUNNING,CreatedAt:1760796613665444485,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-6b586f9694-z6m2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e32c82d5-bbaf-47cf-a6dd-4488d4e419e4,},Annotations:map[string]string{io.kubernetes.container.hash: 6f5328bc,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3faa5d947b9ededdb0f9530cfb2606f9d20f027050a247e368207048d7856361,PodSandboxId:04626452678ece1669cf1b64aa42ec4e38880fec5bfbbb2efb6abcab66a2eba0,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760796611084064989,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2be3a2-f8a7-4762-a4a6-aeea42df7e21,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6994a710f664286f4f32da05934f7d105555c9f461da0e0d8aa1d59d4491b88c,PodSandboxId:c55e42c37ec069282d11458553e01a94da36a92dd441bde1d986e078ed756519,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:
&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:3c54348fe5a57e5700e7d8068e7531d2ef2d5f3ccb70c8f6bac0953432527abd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcbf0ecf3195887f4b6b497d542660d9e7b1409b502bfddc284c04e3d8155f57,State:CONTAINER_RUNNING,CreatedAt:1760796595968488301,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-5z8tb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e21578d-6373-41a1-aaa9-7c86d80f9c8c,},Annotations:map[string]string{io.kubernetes.container.hash: f71f4593,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da75007bac0f47603bb3540fd8ae444427639a840b26793c26a279445acc6504,PodSandboxId:bf130a85fe68d5cdda719544aa9afd112627aeb7acb1df2c62daeedf486112a3,Metadata:&ContainerMe
tadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760796577983458040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6f8bdeb-9db0-44f3-b3cb-8396901acaf5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90350cf8ae05058e381c6f06dfaaa1b66c33001b294c94602cbb4601d22e5bc2,PodSandboxId:b439dd6e51abd6ee7156af98c543df3bcd516cd309de6b0b6fd934ae60d4579a,Metadata:&ContainerMetadata{Name:
amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760796574525913819,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-c5cbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64430541-160f-413b-b21e-6636047a8859,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b099b5b37807cb6ddae926ed2ce7fd3b3113ee1520cb817da8f25923c16c925,PodSandboxId:ba30da275bea105c47caa89fd0d4a924e96bd43b200434b972d0f1686c
5cdb46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760796569075663973,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9t6mk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2cf3593-0ffc-49aa-ab5d-1ecf71d259cc,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.contai
ner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97e1670c81585e6415c369e52af3deebb586e548711c359ac4fe22d13bfbf881,PodSandboxId:8fb6c60415fdaa40da442b8d93572f59350e86e5027e05f1e616ddc3e66d1895,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760796567868668763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ckpzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3ac992c-4401-40f5-93dd-7a525ec3b2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f010fdc156cb398c84f19945fc8b9f186ef23cb554bce047cf0bdadc63ef552,PodSandboxId:bfa6fdc1baf4d2d9eaa5d56358672ee6314ea527df88bc7c5cfbb6d68599a772,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760796553601510681,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4360d09804819a4ab0d1ffed7423947,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"
protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873a633e0ebfdc97218e103cd398dde377449c146a2b3d8affa3222d72e07fad,PodSandboxId:4b35987ede0428e0950b004d1104001ead21d6b6989238185c2fb74d3cf3bf44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760796553612924961,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1348b107c675acfd26c3d687c91d60c5,},Annotations:map[string]string{io.kuber
netes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50cc3d2477595030b199dee8a2c8a4cb8f2f508dbbe7bdf89f535de0d3d1d6b6,PodSandboxId:b783fc0f686a0773f409244090fb0347fd53adfbe3110712527fc3d39b81e149,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760796553577778017,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-891059,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 5086595138b36f6eb8ac54e83c6bc182,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:550e8ca214589028236bc3f3e98efbed492d3f84addbacedfb6929bee8541bab,PodSandboxId:c8fbc229d4f5f4b227bfc321c455f9928cc82e2099fb0746d33c7d9c893295f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760796553532990421,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97082571db3e60e44c3d60e99a384436,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b79646a1-0b04-4962-8ab8-3d8e6a96cbd7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:12:30 addons-891059 crio[822]: time="2025-10-18 14:12:30.049787123Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=27ef1dd7-f3b7-49f9-9c5f-18327a849a12 name=/runtime.v1.RuntimeService/Version
	Oct 18 14:12:30 addons-891059 crio[822]: time="2025-10-18 14:12:30.049937084Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=27ef1dd7-f3b7-49f9-9c5f-18327a849a12 name=/runtime.v1.RuntimeService/Version
	Oct 18 14:12:30 addons-891059 crio[822]: time="2025-10-18 14:12:30.051318896Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=b6585825-0a5b-4017-9beb-5e8c24922497 name=/runtime.v1.RuntimeService/Status
	Oct 18 14:12:30 addons-891059 crio[822]: time="2025-10-18 14:12:30.051392967Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=b6585825-0a5b-4017-9beb-5e8c24922497 name=/runtime.v1.RuntimeService/Status
	Oct 18 14:12:30 addons-891059 crio[822]: time="2025-10-18 14:12:30.052373728Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bbfd40bc-9fab-4b63-838e-5eed2e224d9b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:12:30 addons-891059 crio[822]: time="2025-10-18 14:12:30.054925539Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760796750054833338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:497776,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bbfd40bc-9fab-4b63-838e-5eed2e224d9b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:12:30 addons-891059 crio[822]: time="2025-10-18 14:12:30.055681457Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f29fcdc0-c792-4893-b8db-2bbc5e613b9b name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:12:30 addons-891059 crio[822]: time="2025-10-18 14:12:30.055762903Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f29fcdc0-c792-4893-b8db-2bbc5e613b9b name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:12:30 addons-891059 crio[822]: time="2025-10-18 14:12:30.056346770Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4019b2f5a82ebc7fb6dabae9a874d699665a5d8c69de73eb709ca4a501ac015,PodSandboxId:871fa03a650614957b7d3d2014f39478cf8cb5cd45eb550c6abd6222b43732a9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760796662606988160,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 75ccff45-9202-4152-b90e-8a5a6d306c7d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5e462bcd2b5f465fe95346688533db6801a9c93215937bfbcf4abffe97f6c0,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1760796657878096678,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e429add87fb7915cacc16256e7047f4f649d645dd6350add56e90ceda89be5cb,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1760796656108432125,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c154e6ad0036f8e08a29b6d27bd296913987ed0f4235dd603093178c177e86b,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1760796654288737227,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e42c0ad16a76575cdf86955e752de6fc61fbdffec61b610745b88dc300290e,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1760796650670836429,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:
map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ce2976bee33494c9148720fc6f41dafc7c06699c436b9f7352992e408fc1ce,PodSandboxId:2f9eb1464924400027510bd40640a85e472321a499aaff7e545d8f90a3a2b454,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760796649028158931,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-con
troller-675c5ddd98-bphwz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5355fea1-7cc1-4587-853e-61aaaa6f569e,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9830a2003573c4745aeef463de8c6f60ef95ad1ea86413fbba89a04f8d287e29,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1760796641350506570,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b41579872800aaa54c544cb3ac01bd4bfbdb75ed8bfb2068b63a461effcb494,PodSandboxId:d23e703cbfeb7f985a5ee31bbb8e9a0beaaca929b2a9d12c66bc036a83f06e54,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1760796639902169014,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efc965f-2bb9-4589-8896-270849ff244b,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b6304f138a157686248517d9a4334e9f7e0a04eb4d75d3e8242c7d66099747,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name
:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1760796637960180053,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3781d3641f70c2afdd9e7cf33046996dcefa7ceeb31eaeb6735fe958ea81fbdf,PodSa
ndboxId:2d23bcaba041603a7033e5364863b52ee33056bf513c91b93cbd051dc4ee50fb,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1760796636160087491,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-bzhfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3e3fb2c-05b7-448d-bca6-3438d70868b1,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:9bb6d569a2a3f2ef99bf632b0e17f74e8f99944756e5338f36177afc9784250e,PodSandboxId:7a44187aa2259b4391883c3f4e9b9dfefc7c60831b7bfc9273715b7a8b6675b5,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1760796636024422683,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-b9tnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a028a732-94f8-46f5-8ade-adc72e44a92d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6267021fe47465dfb0a972ca3ac1853819fcb8ec9c4af79da3515676f56c70d,PodSandboxId:7483a2b2bce44deaa3b7126ad65266f9ccb9eb59517cc399fde2646bdce00e31,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760796634343510547,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lz2l5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: edbb1e3e-09f2-4958-b943-de86e541c2ab,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e786527308546addc508c7f9fde815f3dbf888dbbd28417a6fda88b88fa8ab,PodSandboxId:19bb29e5d6915f98e1c622bd12dfd02a46541ba9d2922196d95c45d1eef03591,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1760796634154278160,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66fa96af-5215-410d-899c-8ee3de6c2691,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:405281ec9edfa02e6ef1722dec6adc497496544ed9e116c4827e07faa66e42b3,PodSandboxId:784fb9851d0e370b86d85cb15f009b0ada6ea2b7f21e505158415537390f7d3a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760796631912253285,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nbrm2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e48f1e46-67fb-4c71-bc01-b2f3743345f0,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6be9db168b34b65859e2a83fbc18c461a5bb49d6ad7bba303b588f6380b543,PodSandboxId:0d40ad681440576ca60a0ebc571e472f20c3491afca985ce04d2353688f30b9d,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1760796628053161734,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ff678cb9-xt7jp,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 9ff96a54-feef-40f7-883d-557d20da0d77,},Annotations:map[string]string{io.kubernetes.container.hash: e656c288,io.kubernetes.container.ports: [{\"name\":\"htt
p\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c389fedf82c73101b96bb9331713ba0cf1fe89e497bb463f4a1a5c8f965331eb,PodSandboxId:f6cf7a6905b38496b0fb0dffcad88c191af9be4e2d42b30916a7239099dd25d8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1760796623404092240,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-kj8pr,io.kubernetes.pod.namespace: local-path-storage,io.kubernete
s.pod.uid: b9e6b11c-bbb9-4e19-9cb4-ca24b2aa3018,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:751b2df6a5bf4c3261a679f6e961086b9a7e8a0d308b47ba5a823ed41d50ff7c,PodSandboxId:e7adc46dd97a6e6351f075aad05529d7968ddcfdb815b441bff765545717c999,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760796621649083356,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-bz8k2,io.kubernetes.pod.namespace: gadget,io.kubernetes
.pod.uid: 32f0a88f-aea2-4621-a5b1-df5a3fb86a2b,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d7cc263f993b071d2f5739d619a7384f2d0c7bffc66c17ef715c37d409878c6,PodSandboxId:0c969633ab3503729449ea3baa764c1275a9f42d3acd7406059bded4be881af0,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1760796616663639232,Labels:map[string]string{io.kubernetes.contain
er.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-tmmvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb52b147-d27f-4a99-9ec8-ffd5f90861e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:541478d62a8b3413ce7d1f0e6cf5eeda124ed1193aae54e2f3686911eb6e9fef,PodSandboxId:cf8744f7132e8edf93dc682e9bbccd5e1405ebdb6ed55d3db698ba8d8313cefe,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e4e5706768198b632e90feae7e51918ffac889893
6ee9c3bbcf036f84c8f5ba1,State:CONTAINER_RUNNING,CreatedAt:1760796613665444485,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-6b586f9694-z6m2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e32c82d5-bbaf-47cf-a6dd-4488d4e419e4,},Annotations:map[string]string{io.kubernetes.container.hash: 6f5328bc,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3faa5d947b9ededdb0f9530cfb2606f9d20f027050a247e368207048d7856361,PodSandboxId:04626452678ece1669cf1b64aa42ec4e38880fec5bfbbb2efb6abcab66a2eba0,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760796611084064989,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2be3a2-f8a7-4762-a4a6-aeea42df7e21,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6994a710f664286f4f32da05934f7d105555c9f461da0e0d8aa1d59d4491b88c,PodSandboxId:c55e42c37ec069282d11458553e01a94da36a92dd441bde1d986e078ed756519,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:
&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:3c54348fe5a57e5700e7d8068e7531d2ef2d5f3ccb70c8f6bac0953432527abd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcbf0ecf3195887f4b6b497d542660d9e7b1409b502bfddc284c04e3d8155f57,State:CONTAINER_RUNNING,CreatedAt:1760796595968488301,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-5z8tb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e21578d-6373-41a1-aaa9-7c86d80f9c8c,},Annotations:map[string]string{io.kubernetes.container.hash: f71f4593,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da75007bac0f47603bb3540fd8ae444427639a840b26793c26a279445acc6504,PodSandboxId:bf130a85fe68d5cdda719544aa9afd112627aeb7acb1df2c62daeedf486112a3,Metadata:&ContainerMe
tadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760796577983458040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6f8bdeb-9db0-44f3-b3cb-8396901acaf5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90350cf8ae05058e381c6f06dfaaa1b66c33001b294c94602cbb4601d22e5bc2,PodSandboxId:b439dd6e51abd6ee7156af98c543df3bcd516cd309de6b0b6fd934ae60d4579a,Metadata:&ContainerMetadata{Name:
amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760796574525913819,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-c5cbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64430541-160f-413b-b21e-6636047a8859,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b099b5b37807cb6ddae926ed2ce7fd3b3113ee1520cb817da8f25923c16c925,PodSandboxId:ba30da275bea105c47caa89fd0d4a924e96bd43b200434b972d0f1686c
5cdb46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760796569075663973,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9t6mk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2cf3593-0ffc-49aa-ab5d-1ecf71d259cc,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.contai
ner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97e1670c81585e6415c369e52af3deebb586e548711c359ac4fe22d13bfbf881,PodSandboxId:8fb6c60415fdaa40da442b8d93572f59350e86e5027e05f1e616ddc3e66d1895,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760796567868668763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ckpzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3ac992c-4401-40f5-93dd-7a525ec3b2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f010fdc156cb398c84f19945fc8b9f186ef23cb554bce047cf0bdadc63ef552,PodSandboxId:bfa6fdc1baf4d2d9eaa5d56358672ee6314ea527df88bc7c5cfbb6d68599a772,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760796553601510681,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4360d09804819a4ab0d1ffed7423947,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873a633e0ebfdc97218e103cd398dde377449c146a2b3d8affa3222d72e07fad,PodSandboxId:4b35987ede0428e0950b004d1104001ead21d6b6989238185c2fb74d3cf3bf44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760796553612924961,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1348b107c675acfd26c3d687c91d60c5,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50cc3d2477595030b199dee8a2c8a4cb8f2f508dbbe7bdf89f535de0d3d1d6b6,PodSandboxId:b783fc0f686a0773f409244090fb0347fd53adfbe3110712527fc3d39b81e149,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760796553577778017,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5086595138b36f6eb8ac54e83c6bc182,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:550e8ca214589028236bc3f3e98efbed492d3f84addbacedfb6929bee8541bab,PodSandboxId:c8fbc229d4f5f4b227bfc321c455f9928cc82e2099fb0746d33c7d9c893295f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760796553532990421,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97082571db3e60e44c3d60e99a384436,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f29fcdc0-c792-4893-b8db-2bbc5e613b9b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	a4019b2f5a82e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          About a minute ago   Running             busybox                                  0                   871fa03a65061       busybox
	2d5e462bcd2b5       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   90e767d4c7dba       csi-hostpathplugin-65z6z
	e429add87fb79       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          About a minute ago   Running             csi-provisioner                          0                   90e767d4c7dba       csi-hostpathplugin-65z6z
	0c154e6ad0036       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            About a minute ago   Running             liveness-probe                           0                   90e767d4c7dba       csi-hostpathplugin-65z6z
	34e42c0ad16a7       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           About a minute ago   Running             hostpath                                 0                   90e767d4c7dba       csi-hostpathplugin-65z6z
	90ce2976bee33       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd                             About a minute ago   Running             controller                               0                   2f9eb14649244       ingress-nginx-controller-675c5ddd98-bphwz
	9830a2003573c       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                About a minute ago   Running             node-driver-registrar                    0                   90e767d4c7dba       csi-hostpathplugin-65z6z
	8b41579872800       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              About a minute ago   Running             csi-resizer                              0                   d23e703cbfeb7       csi-hostpath-resizer-0
	e6b6304f138a1       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   About a minute ago   Running             csi-external-health-monitor-controller   0                   90e767d4c7dba       csi-hostpathplugin-65z6z
	3781d3641f70c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   2d23bcaba0416       snapshot-controller-7d9fbc56b8-bzhfk
	9bb6d569a2a3f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   7a44187aa2259       snapshot-controller-7d9fbc56b8-b9tnq
	a6267021fe474       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39                   About a minute ago   Exited              patch                                    0                   7483a2b2bce44       ingress-nginx-admission-patch-lz2l5
	c8e7865273085       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             About a minute ago   Running             csi-attacher                             0                   19bb29e5d6915       csi-hostpath-attacher-0
	405281ec9edfa       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39                   About a minute ago   Exited              create                                   0                   784fb9851d0e3       ingress-nginx-admission-create-nbrm2
	0b6be9db168b3       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              2 minutes ago        Running             yakd                                     0                   0d40ad6814405       yakd-dashboard-5ff678cb9-xt7jp
	c389fedf82c73       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             2 minutes ago        Running             local-path-provisioner                   0                   f6cf7a6905b38       local-path-provisioner-648f6765c9-kj8pr
	751b2df6a5bf4       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            2 minutes ago        Running             gadget                                   0                   e7adc46dd97a6       gadget-bz8k2
	9d7cc263f993b       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              2 minutes ago        Running             registry-proxy                           0                   0c969633ab350       registry-proxy-tmmvd
	541478d62a8b3       docker.io/library/registry@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e                                           2 minutes ago        Running             registry                                 0                   cf8744f7132e8       registry-6b586f9694-z6m2x
	3faa5d947b9ed       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               2 minutes ago        Running             minikube-ingress-dns                     0                   04626452678ec       kube-ingress-dns-minikube
	6994a710f6642       nvcr.io/nvidia/k8s-device-plugin@sha256:3c54348fe5a57e5700e7d8068e7531d2ef2d5f3ccb70c8f6bac0953432527abd                                     2 minutes ago        Running             nvidia-device-plugin-ctr                 0                   c55e42c37ec06       nvidia-device-plugin-daemonset-5z8tb
	da75007bac0f4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             2 minutes ago        Running             storage-provisioner                      0                   bf130a85fe68d       storage-provisioner
	90350cf8ae050       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     2 minutes ago        Running             amd-gpu-device-plugin                    0                   b439dd6e51abd       amd-gpu-device-plugin-c5cbb
	5b099b5b37807       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago        Running             coredns                                  0                   ba30da275bea1       coredns-66bc5c9577-9t6mk
	97e1670c81585       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             3 minutes ago        Running             kube-proxy                               0                   8fb6c60415fda       kube-proxy-ckpzl
	873a633e0ebfd       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             3 minutes ago        Running             kube-controller-manager                  0                   4b35987ede042       kube-controller-manager-addons-891059
	4f010fdc156cb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             3 minutes ago        Running             etcd                                     0                   bfa6fdc1baf4d       etcd-addons-891059
	50cc3d2477595       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             3 minutes ago        Running             kube-scheduler                           0                   b783fc0f686a0       kube-scheduler-addons-891059
	550e8ca214589       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             3 minutes ago        Running             kube-apiserver                           0                   c8fbc229d4f5f       kube-apiserver-addons-891059
	
	
	==> coredns [5b099b5b37807cb6ddae926ed2ce7fd3b3113ee1520cb817da8f25923c16c925] <==
	[INFO] 127.0.0.1:53711 - 54759 "HINFO IN 5610693908805463987.8434740510981182027. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.49978208s
	[INFO] 10.244.0.8:38553 - 27967 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000377351s
	[INFO] 10.244.0.8:38553 - 24958 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000394197s
	[INFO] 10.244.0.8:38553 - 10771 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000293762s
	[INFO] 10.244.0.8:38553 - 9899 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.001036278s
	[INFO] 10.244.0.8:38553 - 22142 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000077446s
	[INFO] 10.244.0.8:38553 - 18342 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000125153s
	[INFO] 10.244.0.8:38553 - 29995 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000099874s
	[INFO] 10.244.0.8:38553 - 35504 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000072442s
	[INFO] 10.244.0.8:41254 - 10457 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000126469s
	[INFO] 10.244.0.8:41254 - 10148 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000351753s
	[INFO] 10.244.0.8:58812 - 14712 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000165201s
	[INFO] 10.244.0.8:58812 - 14408 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000227737s
	[INFO] 10.244.0.8:46072 - 17563 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000089989s
	[INFO] 10.244.0.8:46072 - 17331 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000357865s
	[INFO] 10.244.0.8:44214 - 24523 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000103993s
	[INFO] 10.244.0.8:44214 - 24308 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000319225s
	[INFO] 10.244.0.23:53101 - 38230 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000789741s
	[INFO] 10.244.0.23:39743 - 4637 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00014608s
	[INFO] 10.244.0.23:34680 - 45484 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000257617s
	[INFO] 10.244.0.23:57667 - 2834 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000156321s
	[INFO] 10.244.0.23:49060 - 9734 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000228026s
	[INFO] 10.244.0.23:49380 - 40146 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00011544s
	[INFO] 10.244.0.23:59610 - 60837 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001192659s
	[INFO] 10.244.0.23:43936 - 55741 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001950004s
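The NXDOMAIN/NOERROR pairs above are a pod's resolver walking its search path: Kubernetes writes the pod's /etc/resolv.conf with ndots:5 and the search domains <namespace>.svc.cluster.local, svc.cluster.local, cluster.local (here the querying pod sits in kube-system), so a lookup of registry.kube-system.svc.cluster.local, which has only four dots, is first tried with each suffix appended (hence the .kube-system.svc.cluster.local.kube-system.svc.cluster.local probes) before the bare name answers NOERROR. A minimal Go sketch of the same behavior, run from inside a pod; the names come from the log, and the trailing-dot variant is added here only to show how an absolute name skips the expansion:

package main

import (
	"fmt"
	"net"
)

func main() {
	// The first name has four dots; with the pod default ndots:5 the
	// resolver tries each search suffix first (producing the NXDOMAIN
	// probes seen in the coredns log) before querying the bare name.
	// The trailing dot on the second name marks it absolute, so the
	// search path is skipped: one query, immediate NOERROR.
	for _, name := range []string{
		"registry.kube-system.svc.cluster.local",
		"registry.kube-system.svc.cluster.local.",
	} {
		ips, err := net.LookupIP(name)
		fmt.Println(name, ips, err)
	}
}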
	
	
	==> describe nodes <==
	Name:               addons-891059
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-891059
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=addons-891059
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T14_09_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-891059
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-891059"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 14:09:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-891059
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 14:12:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 14:11:33 +0000   Sat, 18 Oct 2025 14:09:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 14:11:33 +0000   Sat, 18 Oct 2025 14:09:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 14:11:33 +0000   Sat, 18 Oct 2025 14:09:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 14:11:33 +0000   Sat, 18 Oct 2025 14:09:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.100
	  Hostname:    addons-891059
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 372d92314fa4448095fc5052e6676096
	  System UUID:                372d9231-4fa4-4480-95fc-5052e6676096
	  Boot ID:                    7e38709f-8590-4225-8b4d-3bbac20f6c51
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  default                     registry-test                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  default                     test-local-path                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  gadget                      gadget-bz8k2                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m57s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-bphwz    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         2m57s
	  kube-system                 amd-gpu-device-plugin-c5cbb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  kube-system                 coredns-66bc5c9577-9t6mk                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m6s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	  kube-system                 csi-hostpathplugin-65z6z                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	  kube-system                 etcd-addons-891059                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m11s
	  kube-system                 kube-apiserver-addons-891059                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m11s
	  kube-system                 kube-controller-manager-addons-891059        200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m12s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	  kube-system                 kube-proxy-ckpzl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	  kube-system                 kube-scheduler-addons-891059                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m11s
	  kube-system                 nvidia-device-plugin-daemonset-5z8tb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m4s
	  kube-system                 registry-6b586f9694-z6m2x                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	  kube-system                 registry-creds-764b6fb674-sg8jp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  kube-system                 registry-proxy-tmmvd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	  kube-system                 snapshot-controller-7d9fbc56b8-b9tnq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m56s
	  kube-system                 snapshot-controller-7d9fbc56b8-bzhfk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  local-path-storage          local-path-provisioner-648f6765c9-kj8pr      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-xt7jp               0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     2m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             388Mi (9%)  426Mi (10%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m     kube-proxy       
	  Normal  Starting                 3m11s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m11s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m11s  kubelet          Node addons-891059 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m11s  kubelet          Node addons-891059 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m11s  kubelet          Node addons-891059 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m10s  kubelet          Node addons-891059 status is now: NodeReady
	  Normal  RegisteredNode           3m7s   node-controller  Node addons-891059 event: Registered Node addons-891059 in Controller
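The Allocated resources block above is just the column sums of the pod table over the node's capacity: the six non-zero CPU requests are 100m + 100m + 100m + 250m + 200m + 100m = 850m, and 850m of the 2-CPU (2000m) node is 42%; likewise 90Mi + 70Mi + 100Mi + 128Mi = 388Mi of memory requested and 170Mi + 256Mi = 426Mi limited. A quick check of the CPU line, with the values copied from the table:

package main

import "fmt"

func main() {
	// Non-zero CPU requests from the pod table, in millicores.
	requests := []int{
		100, // ingress-nginx-controller
		100, // coredns
		100, // etcd
		250, // kube-apiserver
		200, // kube-controller-manager
		100, // kube-scheduler
	}
	total := 0
	for _, m := range requests {
		total += m
	}
	const capacity = 2000 // 2 CPUs, per the Capacity block
	fmt.Printf("%dm of %dm (%d%%)\n", total, capacity, total*100/capacity)
	// Output: 850m of 2000m (42%)
}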
	
	
	==> dmesg <==
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000003] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct18 14:09] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.097477] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.131235] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.024674] kauditd_printk_skb: 18 callbacks suppressed
	[  +1.704116] kauditd_printk_skb: 297 callbacks suppressed
	[  +0.252518] kauditd_printk_skb: 227 callbacks suppressed
	[  +0.620971] kauditd_printk_skb: 414 callbacks suppressed
	[ +15.304937] kauditd_printk_skb: 49 callbacks suppressed
	[Oct18 14:10] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.485780] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.577564] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.762881] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.526985] kauditd_printk_skb: 26 callbacks suppressed
	[  +2.667244] kauditd_printk_skb: 76 callbacks suppressed
	[  +3.038951] kauditd_printk_skb: 160 callbacks suppressed
	[  +5.632898] kauditd_printk_skb: 88 callbacks suppressed
	[  +5.124721] kauditd_printk_skb: 47 callbacks suppressed
	[Oct18 14:11] kauditd_printk_skb: 41 callbacks suppressed
	[ +11.104883] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.000298] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.000091] kauditd_printk_skb: 61 callbacks suppressed
	[  +0.000058] kauditd_printk_skb: 94 callbacks suppressed
	[  +5.819366] kauditd_printk_skb: 58 callbacks suppressed
	
	
	==> etcd [4f010fdc156cb398c84f19945fc8b9f186ef23cb554bce047cf0bdadc63ef552] <==
	{"level":"info","ts":"2025-10-18T14:09:58.764509Z","caller":"traceutil/trace.go:172","msg":"trace[465774951] range","detail":"{range_begin:/registry/roles/gadget/gadget-role; range_end:; response_count:1; response_revision:905; }","duration":"302.426647ms","start":"2025-10-18T14:09:58.462077Z","end":"2025-10-18T14:09:58.764504Z","steps":["trace[465774951] 'range keys from in-memory index tree'  (duration: 302.316054ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T14:09:58.764522Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T14:09:58.462057Z","time spent":"302.461198ms","remote":"127.0.0.1:54116","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":951,"request content":"key:\"/registry/roles/gadget/gadget-role\" limit:1 "}
	{"level":"info","ts":"2025-10-18T14:10:00.853965Z","caller":"traceutil/trace.go:172","msg":"trace[434268559] linearizableReadLoop","detail":"{readStateIndex:932; appliedIndex:932; }","duration":"182.950105ms","start":"2025-10-18T14:10:00.670985Z","end":"2025-10-18T14:10:00.853935Z","steps":["trace[434268559] 'read index received'  (duration: 182.94367ms)","trace[434268559] 'applied index is now lower than readState.Index'  (duration: 5.683µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T14:10:00.854082Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"183.099754ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T14:10:00.854101Z","caller":"traceutil/trace.go:172","msg":"trace[58497677] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:907; }","duration":"183.136547ms","start":"2025-10-18T14:10:00.670959Z","end":"2025-10-18T14:10:00.854096Z","steps":["trace[58497677] 'agreement among raft nodes before linearized reading'  (duration: 183.079771ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T14:10:00.854381Z","caller":"traceutil/trace.go:172","msg":"trace[1650576472] transaction","detail":"{read_only:false; response_revision:908; number_of_response:1; }","duration":"235.507231ms","start":"2025-10-18T14:10:00.618865Z","end":"2025-10-18T14:10:00.854372Z","steps":["trace[1650576472] 'process raft request'  (duration: 235.404804ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T14:10:00.868040Z","caller":"traceutil/trace.go:172","msg":"trace[748641497] transaction","detail":"{read_only:false; response_revision:909; number_of_response:1; }","duration":"131.604046ms","start":"2025-10-18T14:10:00.736352Z","end":"2025-10-18T14:10:00.867956Z","steps":["trace[748641497] 'process raft request'  (duration: 128.931403ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T14:10:07.138101Z","caller":"traceutil/trace.go:172","msg":"trace[585135135] linearizableReadLoop","detail":"{readStateIndex:954; appliedIndex:954; }","duration":"194.477234ms","start":"2025-10-18T14:10:06.943598Z","end":"2025-10-18T14:10:07.138075Z","steps":["trace[585135135] 'read index received'  (duration: 194.391262ms)","trace[585135135] 'applied index is now lower than readState.Index'  (duration: 84.538µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T14:10:07.138478Z","caller":"traceutil/trace.go:172","msg":"trace[589194672] transaction","detail":"{read_only:false; response_revision:929; number_of_response:1; }","duration":"198.699453ms","start":"2025-10-18T14:10:06.939770Z","end":"2025-10-18T14:10:07.138470Z","steps":["trace[589194672] 'process raft request'  (duration: 198.442363ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T14:10:07.138967Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"195.399978ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T14:10:07.139800Z","caller":"traceutil/trace.go:172","msg":"trace[127993653] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:929; }","duration":"196.286827ms","start":"2025-10-18T14:10:06.943504Z","end":"2025-10-18T14:10:07.139790Z","steps":["trace[127993653] 'agreement among raft nodes before linearized reading'  (duration: 195.369247ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T14:10:09.346180Z","caller":"traceutil/trace.go:172","msg":"trace[1251971802] transaction","detail":"{read_only:false; response_revision:932; number_of_response:1; }","duration":"188.805531ms","start":"2025-10-18T14:10:09.157362Z","end":"2025-10-18T14:10:09.346167Z","steps":["trace[1251971802] 'process raft request'  (duration: 188.697032ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T14:10:10.941684Z","caller":"traceutil/trace.go:172","msg":"trace[749719411] transaction","detail":"{read_only:false; response_revision:933; number_of_response:1; }","duration":"142.492535ms","start":"2025-10-18T14:10:10.799179Z","end":"2025-10-18T14:10:10.941672Z","steps":["trace[749719411] 'process raft request'  (duration: 142.249487ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T14:10:27.789604Z","caller":"traceutil/trace.go:172","msg":"trace[1796724145] linearizableReadLoop","detail":"{readStateIndex:1010; appliedIndex:1010; }","duration":"117.101433ms","start":"2025-10-18T14:10:27.672412Z","end":"2025-10-18T14:10:27.789513Z","steps":["trace[1796724145] 'read index received'  (duration: 117.095017ms)","trace[1796724145] 'applied index is now lower than readState.Index'  (duration: 5.183µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T14:10:27.789790Z","caller":"traceutil/trace.go:172","msg":"trace[1019503945] transaction","detail":"{read_only:false; response_revision:980; number_of_response:1; }","duration":"291.472583ms","start":"2025-10-18T14:10:27.498307Z","end":"2025-10-18T14:10:27.789779Z","steps":["trace[1019503945] 'process raft request'  (duration: 291.315936ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T14:10:27.789826Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.361325ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T14:10:27.789858Z","caller":"traceutil/trace.go:172","msg":"trace[1466024528] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:979; }","duration":"117.444796ms","start":"2025-10-18T14:10:27.672405Z","end":"2025-10-18T14:10:27.789850Z","steps":["trace[1466024528] 'agreement among raft nodes before linearized reading'  (duration: 117.307687ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T14:10:27.790385Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.236345ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T14:10:27.790510Z","caller":"traceutil/trace.go:172","msg":"trace[732980754] range","detail":"{range_begin:/registry/deployments; range_end:; response_count:0; response_revision:980; }","duration":"108.373321ms","start":"2025-10-18T14:10:27.682130Z","end":"2025-10-18T14:10:27.790503Z","steps":["trace[732980754] 'agreement among raft nodes before linearized reading'  (duration: 108.1351ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T14:10:31.360128Z","caller":"traceutil/trace.go:172","msg":"trace[1845619058] transaction","detail":"{read_only:false; response_revision:997; number_of_response:1; }","duration":"140.456007ms","start":"2025-10-18T14:10:31.219657Z","end":"2025-10-18T14:10:31.360113Z","steps":["trace[1845619058] 'process raft request'  (duration: 140.331758ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T14:10:46.208681Z","caller":"traceutil/trace.go:172","msg":"trace[1766959808] transaction","detail":"{read_only:false; response_revision:1104; number_of_response:1; }","duration":"186.674963ms","start":"2025-10-18T14:10:46.021984Z","end":"2025-10-18T14:10:46.208659Z","steps":["trace[1766959808] 'process raft request'  (duration: 186.50291ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T14:11:02.952579Z","caller":"traceutil/trace.go:172","msg":"trace[1731516554] linearizableReadLoop","detail":"{readStateIndex:1235; appliedIndex:1235; }","duration":"113.28639ms","start":"2025-10-18T14:11:02.839276Z","end":"2025-10-18T14:11:02.952562Z","steps":["trace[1731516554] 'read index received'  (duration: 113.240159ms)","trace[1731516554] 'applied index is now lower than readState.Index'  (duration: 45.276µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T14:11:02.953674Z","caller":"traceutil/trace.go:172","msg":"trace[374499777] transaction","detail":"{read_only:false; response_revision:1198; number_of_response:1; }","duration":"131.03911ms","start":"2025-10-18T14:11:02.822625Z","end":"2025-10-18T14:11:02.953664Z","steps":["trace[374499777] 'process raft request'  (duration: 130.864849ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T14:11:02.953956Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.682576ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T14:11:02.958891Z","caller":"traceutil/trace.go:172","msg":"trace[2098939205] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1198; }","duration":"119.626167ms","start":"2025-10-18T14:11:02.839251Z","end":"2025-10-18T14:11:02.958878Z","steps":["trace[2098939205] 'agreement among raft nodes before linearized reading'  (duration: 114.665108ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:12:30 up 3 min,  0 users,  load average: 1.58, 1.81, 0.82
	Linux addons-891059 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [550e8ca214589028236bc3f3e98efbed492d3f84addbacedfb6929bee8541bab] <==
	W1018 14:09:53.440835       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1018 14:09:53.453446       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1018 14:09:53.493977       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 14:09:53.500603       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 14:10:34.174347       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 14:10:34.174816       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1018 14:10:34.174931       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1018 14:10:34.177190       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 14:10:34.177355       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1018 14:10:34.177368       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1018 14:10:41.344292       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.140.151:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.140.151:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.140.151:443: connect: connection refused" logger="UnhandledError"
	W1018 14:10:41.345235       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 14:10:41.349441       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1018 14:10:41.403792       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1018 14:11:09.006479       1 conn.go:339] Error on socket receive: read tcp 192.168.39.100:8443->192.168.39.1:51796: use of closed network connection
	E1018 14:11:09.215206       1 conn.go:339] Error on socket receive: read tcp 192.168.39.100:8443->192.168.39.1:51814: use of closed network connection
	I1018 14:11:36.964050       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1018 14:11:37.174177       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.128.177"}
	I1018 14:11:42.373806       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
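The OpenAPI 503s and the "dial tcp 10.98.140.151:443: connect: connection refused" error above trace to one cause: the aggregated v1beta1.metrics.k8s.io APIService had no reachable backing service, so the aggregator kept rate-limit-requeuing its OpenAPI fetch until the item was removed from the queue at 14:11:42. The same availability status can be read with the kube-aggregator clientset; a sketch, assuming a kubeconfig at the default location:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
	aggregator "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := aggregator.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Read the conditions (notably Available) that the apiserver log
	// is complaining about for the metrics APIService.
	svc, err := cs.ApiregistrationV1().APIServices().Get(
		context.Background(), "v1beta1.metrics.k8s.io", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range svc.Status.Conditions {
		fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
	}
}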
	
	
	==> kube-controller-manager [873a633e0ebfdc97218e103cd398dde377449c146a2b3d8affa3222d72e07fad] <==
	I1018 14:09:23.460247       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 14:09:23.461755       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 14:09:23.462051       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 14:09:23.462733       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 14:09:23.462816       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 14:09:23.464420       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 14:09:23.465969       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 14:09:23.466053       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 14:09:23.467317       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 14:09:23.471785       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 14:09:23.473104       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 14:09:23.507962       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 14:09:23.507980       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 14:09:23.507988       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1018 14:09:32.271939       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1018 14:09:53.430333       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1018 14:09:53.430686       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1018 14:09:53.430794       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1018 14:09:53.479595       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1018 14:09:53.486163       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1018 14:09:53.531732       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 14:09:53.587475       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1018 14:10:23.541245       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1018 14:10:23.598329       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1018 14:11:22.617268       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	
	
	==> kube-proxy [97e1670c81585e6415c369e52af3deebb586e548711c359ac4fe22d13bfbf881] <==
	I1018 14:09:29.078784       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 14:09:29.179875       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 14:09:29.180064       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.100"]
	E1018 14:09:29.180168       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 14:09:29.435752       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1018 14:09:29.435855       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1018 14:09:29.435886       1 server_linux.go:132] "Using iptables Proxier"
	I1018 14:09:29.458405       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 14:09:29.459486       1 server.go:527] "Version info" version="v1.34.1"
	I1018 14:09:29.459499       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 14:09:29.471972       1 config.go:200] "Starting service config controller"
	I1018 14:09:29.472688       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 14:09:29.472718       1 config.go:106] "Starting endpoint slice config controller"
	I1018 14:09:29.472724       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 14:09:29.472739       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 14:09:29.472745       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 14:09:29.474046       1 config.go:309] "Starting node config controller"
	I1018 14:09:29.474055       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 14:09:29.474060       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 14:09:29.573160       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 14:09:29.573457       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 14:09:29.573493       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [50cc3d2477595030b199dee8a2c8a4cb8f2f508dbbe7bdf89f535de0d3d1d6b6] <==
	E1018 14:09:16.517030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 14:09:16.517067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 14:09:16.517111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 14:09:16.517151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 14:09:16.517190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 14:09:16.517227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 14:09:16.517305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 14:09:16.517334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 14:09:16.517377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 14:09:16.517437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 14:09:16.524951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 14:09:17.315107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 14:09:17.350735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 14:09:17.351152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 14:09:17.351207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 14:09:17.375382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 14:09:17.392110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 14:09:17.451119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 14:09:17.490015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 14:09:17.582674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 14:09:17.653362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 14:09:17.692474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 14:09:17.761718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 14:09:17.762010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1018 14:09:18.995741       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 14:11:33 addons-891059 kubelet[1503]: I1018 14:11:33.483506    1503 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23d1a687-8b62-4e3f-be5e-9664ae7f101e" path="/var/lib/kubelet/pods/23d1a687-8b62-4e3f-be5e-9664ae7f101e/volumes"
	Oct 18 14:11:35 addons-891059 kubelet[1503]: E1018 14:11:35.261949    1503 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 18 14:11:35 addons-891059 kubelet[1503]: E1018 14:11:35.262050    1503 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d9e015-f26a-4270-8187-b8312c331504-gcr-creds podName:55d9e015-f26a-4270-8187-b8312c331504 nodeName:}" failed. No retries permitted until 2025-10-18 14:13:37.262034795 +0000 UTC m=+257.939576846 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/55d9e015-f26a-4270-8187-b8312c331504-gcr-creds") pod "registry-creds-764b6fb674-sg8jp" (UID: "55d9e015-f26a-4270-8187-b8312c331504") : secret "registry-creds-gcr" not found
	Oct 18 14:11:37 addons-891059 kubelet[1503]: I1018 14:11:37.178473    1503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrm2j\" (UniqueName: \"kubernetes.io/projected/3922f28b-1c3b-4a38-b461-c5f57823b438-kube-api-access-lrm2j\") pod \"nginx\" (UID: \"3922f28b-1c3b-4a38-b461-c5f57823b438\") " pod="default/nginx"
	Oct 18 14:11:39 addons-891059 kubelet[1503]: E1018 14:11:39.930464    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760796699930012310  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:497776}  inodes_used:{value:176}}"
	Oct 18 14:11:39 addons-891059 kubelet[1503]: E1018 14:11:39.930515    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760796699930012310  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:497776}  inodes_used:{value:176}}"
	Oct 18 14:11:43 addons-891059 kubelet[1503]: I1018 14:11:43.472770    1503 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-tmmvd" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 14:11:49 addons-891059 kubelet[1503]: E1018 14:11:49.935391    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760796709934379130  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:497776}  inodes_used:{value:176}}"
	Oct 18 14:11:49 addons-891059 kubelet[1503]: E1018 14:11:49.935413    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760796709934379130  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:497776}  inodes_used:{value:176}}"
	Oct 18 14:11:59 addons-891059 kubelet[1503]: E1018 14:11:59.940168    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760796719939656956  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:497776}  inodes_used:{value:176}}"
	Oct 18 14:11:59 addons-891059 kubelet[1503]: E1018 14:11:59.940476    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760796719939656956  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:497776}  inodes_used:{value:176}}"
	Oct 18 14:12:09 addons-891059 kubelet[1503]: E1018 14:12:09.942382    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760796729941990134  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:497776}  inodes_used:{value:176}}"
	Oct 18 14:12:09 addons-891059 kubelet[1503]: E1018 14:12:09.942460    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760796729941990134  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:497776}  inodes_used:{value:176}}"
	Oct 18 14:12:19 addons-891059 kubelet[1503]: E1018 14:12:19.945757    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760796739944157267  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:497776}  inodes_used:{value:176}}"
	Oct 18 14:12:19 addons-891059 kubelet[1503]: E1018 14:12:19.945859    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760796739944157267  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:497776}  inodes_used:{value:176}}"
	Oct 18 14:12:20 addons-891059 kubelet[1503]: I1018 14:12:20.722802    1503 scope.go:117] "RemoveContainer" containerID="c9926fd0065ccdcd866dddac431d130d85ac2cb394daf125d0556cacd0a0b227"
	Oct 18 14:12:24 addons-891059 kubelet[1503]: I1018 14:12:24.473066    1503 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-5z8tb" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 14:12:25 addons-891059 kubelet[1503]: E1018 14:12:25.540236    1503 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://busybox:stable: reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Oct 18 14:12:25 addons-891059 kubelet[1503]: E1018 14:12:25.540317    1503 kuberuntime_image.go:43] "Failed to pull image" err="initializing source docker://busybox:stable: reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Oct 18 14:12:25 addons-891059 kubelet[1503]: E1018 14:12:25.540596    1503 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod test-local-path_default(d6bcb3d3-06c5-4ec8-8496-cf302660e01d): ErrImagePull: initializing source docker://busybox:stable: reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 18 14:12:25 addons-891059 kubelet[1503]: E1018 14:12:25.540674    1503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"initializing source docker://busybox:stable: reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="d6bcb3d3-06c5-4ec8-8496-cf302660e01d"
	Oct 18 14:12:26 addons-891059 kubelet[1503]: E1018 14:12:26.133023    1503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: initializing source docker://busybox:stable: reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="d6bcb3d3-06c5-4ec8-8496-cf302660e01d"
	Oct 18 14:12:29 addons-891059 kubelet[1503]: E1018 14:12:29.948950    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760796749947501504  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:497776}  inodes_used:{value:176}}"
	Oct 18 14:12:29 addons-891059 kubelet[1503]: E1018 14:12:29.948973    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760796749947501504  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:497776}  inodes_used:{value:176}}"
	Oct 18 14:12:30 addons-891059 kubelet[1503]: I1018 14:12:30.474223    1503 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-c5cbb" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [da75007bac0f47603bb3540fd8ae444427639a840b26793c26a279445acc6504] <==
	W1018 14:12:04.750516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:12:06.755044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:12:06.761204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:12:08.765683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:12:08.770758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:12:10.774385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:12:10.780707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:12:12.784717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:12:12.793442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:12:14.797313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:12:14.803781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:12:16.807478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:12:16.813252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:12:18.817873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:12:18.828607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:12:20.831924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:12:20.837468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:12:22.841792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:12:22.847029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:12:24.850785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:12:24.856121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:12:26.860044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:12:26.866754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:12:28.878477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:12:28.893271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
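Note: the repeated MountVolume.SetUp failures for registry-creds-764b6fb674-sg8jp in the kubelet log above come from the missing registry-creds-gcr secret, not from the registry pods under test. A quick confirmation sketch from the same context, using plain kubectl (secret name and namespace are taken from the kubelet errors):
	kubectl --context addons-891059 -n kube-system get secret registry-creds-gcr
	# Given the kubelet errors above, this should report: Error from server (NotFound): secrets "registry-creds-gcr" not found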
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-891059 -n addons-891059
helpers_test.go:269: (dbg) Run:  kubectl --context addons-891059 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx registry-test task-pv-pod test-local-path ingress-nginx-admission-create-nbrm2 ingress-nginx-admission-patch-lz2l5 registry-creds-764b6fb674-sg8jp
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-891059 describe pod nginx registry-test task-pv-pod test-local-path ingress-nginx-admission-create-nbrm2 ingress-nginx-admission-patch-lz2l5 registry-creds-764b6fb674-sg8jp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-891059 describe pod nginx registry-test task-pv-pod test-local-path ingress-nginx-admission-create-nbrm2 ingress-nginx-admission-patch-lz2l5 registry-creds-764b6fb674-sg8jp: exit status 1 (104.069231ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-891059/192.168.39.100
	Start Time:       Sat, 18 Oct 2025 14:11:37 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lrm2j (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lrm2j:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  54s   default-scheduler  Successfully assigned default/nginx to addons-891059
	  Normal  Pulling    54s   kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:                      registry-test
	Namespace:                 default
	Priority:                  0
	Service Account:           default
	Node:                      addons-891059/192.168.39.100
	Start Time:                Sat, 18 Oct 2025 14:11:28 +0000
	Labels:                    run=registry-test
	Annotations:               <none>
	Status:                    Terminating (lasts <invalid>)
	Termination Grace Period:  30s
	IP:                        
	IPs:                       <none>
	Containers:
	  registry-test:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Args:
	      sh
	      -c
	      wget --spider -S http://registry.kube-system.svc.cluster.local
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-92w8d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-92w8d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  63s   default-scheduler  Successfully assigned default/registry-test to addons-891059
	  Normal  Pulling    62s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-891059/192.168.39.100
	Start Time:       Sat, 18 Oct 2025 14:11:27 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-48qc7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-48qc7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  64s   default-scheduler  Successfully assigned default/task-pv-pod to addons-891059
	  Normal  Pulling    64s   kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-891059/192.168.39.100
	Start Time:       Sat, 18 Oct 2025 14:11:23 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2cp2j (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-2cp2j:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age   From               Message
	  ----     ------     ----  ----               -------
	  Normal   Scheduled  68s   default-scheduler  Successfully assigned default/test-local-path to addons-891059
	  Normal   Pulling    67s   kubelet            Pulling image "busybox:stable"
	  Warning  Failed     6s    kubelet            Failed to pull image "busybox:stable": initializing source docker://busybox:stable: reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6s    kubelet            Error: ErrImagePull
	  Normal   BackOff    5s    kubelet            Back-off pulling image "busybox:stable"
	  Warning  Failed     5s    kubelet            Error: ImagePullBackOff

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-nbrm2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-lz2l5" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-sg8jp" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-891059 describe pod nginx registry-test task-pv-pod test-local-path ingress-nginx-admission-create-nbrm2 ingress-nginx-admission-patch-lz2l5 registry-creds-764b6fb674-sg8jp: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-891059 addons disable registry --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/Registry (74.46s)
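Note: the registry-test pod itself was still in ContainerCreating (its gcr.io/k8s-minikube/busybox pull had not completed) when the wget probe timed out, while other pods in the same run (test-local-path, nginx, task-pv-pod) were failing outright on Docker Hub's unauthenticated pull rate limit. A sketch for surfacing the rate-limit events in one pass, using only standard kubectl (the grep token is the error string from the events above):
	kubectl --context addons-891059 get events -A --sort-by=.lastTimestamp | grep toomanyrequests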

TestAddons/parallel/Ingress (492.57s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-891059 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-891059 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-891059 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [3922f28b-1c3b-4a38-b461-c5f57823b438] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-891059 -n addons-891059
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-10-18 14:19:37.481661139 +0000 UTC m=+671.217018173
addons_test.go:252: (dbg) Run:  kubectl --context addons-891059 describe po nginx -n default
addons_test.go:252: (dbg) kubectl --context addons-891059 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-891059/192.168.39.100
Start Time:       Sat, 18 Oct 2025 14:11:37 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.29
IPs:
IP:  10.244.0.29
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lrm2j (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-lrm2j:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  8m                   default-scheduler  Successfully assigned default/nginx to addons-891059
Normal   Pulling    103s (x4 over 8m)    kubelet            Pulling image "docker.io/nginx:alpine"
Warning  Failed     46s (x4 over 6m10s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     46s (x4 over 6m10s)  kubelet            Error: ErrImagePull
Normal   BackOff    1s (x8 over 6m9s)    kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     1s (x8 over 6m9s)    kubelet            Error: ImagePullBackOff
addons_test.go:252: (dbg) Run:  kubectl --context addons-891059 logs nginx -n default
addons_test.go:252: (dbg) Non-zero exit: kubectl --context addons-891059 logs nginx -n default: exit status 1 (77.691438ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:252: kubectl --context addons-891059 logs nginx -n default: exit status 1
addons_test.go:253: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
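Note: every Failed event on this pod is the same docker.io toomanyrequests error, so the 8m0s timeout reflects the pull rate limit rather than an ingress fault. One mitigation sketch, assuming the image is already present on the CI host (minikube's image load subcommand copies a host-side image into the cluster, so the pull never reaches docker.io from inside the VM):
	out/minikube-linux-amd64 -p addons-891059 image load docker.io/nginx:alpine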
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-891059 -n addons-891059
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-891059 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-891059 logs -n 25: (1.478976528s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                                ARGS                                                                                                                                                                                                                                                │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              │ minikube             │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ 18 Oct 25 14:08 UTC │
	│ delete  │ -p download-only-398489                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-398489 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ 18 Oct 25 14:08 UTC │
	│ delete  │ -p download-only-031579                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-031579 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ 18 Oct 25 14:08 UTC │
	│ delete  │ -p download-only-398489                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-398489 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ 18 Oct 25 14:08 UTC │
	│ start   │ --download-only -p binary-mirror-305392 --alsologtostderr --binary-mirror http://127.0.0.1:39643 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-305392 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │                     │
	│ delete  │ -p binary-mirror-305392                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ binary-mirror-305392 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ 18 Oct 25 14:08 UTC │
	│ addons  │ enable dashboard -p addons-891059                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │                     │
	│ addons  │ disable dashboard -p addons-891059                                                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │                     │
	│ start   │ -p addons-891059 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ 18 Oct 25 14:11 UTC │
	│ addons  │ addons-891059 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:11 UTC │ 18 Oct 25 14:11 UTC │
	│ addons  │ addons-891059 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:11 UTC │ 18 Oct 25 14:11 UTC │
	│ addons  │ addons-891059 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:11 UTC │ 18 Oct 25 14:11 UTC │
	│ addons  │ addons-891059 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:11 UTC │ 18 Oct 25 14:11 UTC │
	│ addons  │ addons-891059 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:11 UTC │ 18 Oct 25 14:11 UTC │
	│ ip      │ addons-891059 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:12 UTC │ 18 Oct 25 14:12 UTC │
	│ addons  │ addons-891059 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:12 UTC │ 18 Oct 25 14:12 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-891059                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:12 UTC │ 18 Oct 25 14:12 UTC │
	│ addons  │ addons-891059 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:12 UTC │ 18 Oct 25 14:12 UTC │
	│ addons  │ addons-891059 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:12 UTC │ 18 Oct 25 14:12 UTC │
	│ addons  │ addons-891059 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:12 UTC │ 18 Oct 25 14:12 UTC │
	│ addons  │ enable headlamp -p addons-891059 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:12 UTC │ 18 Oct 25 14:12 UTC │
	│ addons  │ addons-891059 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:14 UTC │ 18 Oct 25 14:14 UTC │
	│ addons  │ addons-891059 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:14 UTC │ 18 Oct 25 14:15 UTC │
	│ addons  │ addons-891059 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:17 UTC │ 18 Oct 25 14:17 UTC │
	│ addons  │ addons-891059 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:17 UTC │ 18 Oct 25 14:17 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 14:08:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 14:08:38.383524 1760410 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:08:38.383797 1760410 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:08:38.383806 1760410 out.go:374] Setting ErrFile to fd 2...
	I1018 14:08:38.383810 1760410 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:08:38.383984 1760410 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1755824/.minikube/bin
	I1018 14:08:38.384564 1760410 out.go:368] Setting JSON to false
	I1018 14:08:38.385550 1760410 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":21066,"bootTime":1760775452,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 14:08:38.385650 1760410 start.go:141] virtualization: kvm guest
	I1018 14:08:38.387370 1760410 out.go:179] * [addons-891059] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 14:08:38.388598 1760410 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 14:08:38.388649 1760410 notify.go:220] Checking for updates...
	I1018 14:08:38.390750 1760410 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 14:08:38.391832 1760410 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1755824/kubeconfig
	I1018 14:08:38.392857 1760410 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1755824/.minikube
	I1018 14:08:38.393954 1760410 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 14:08:38.395387 1760410 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 14:08:38.397030 1760410 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 14:08:38.428089 1760410 out.go:179] * Using the kvm2 driver based on user configuration
	I1018 14:08:38.429204 1760410 start.go:305] selected driver: kvm2
	I1018 14:08:38.429233 1760410 start.go:925] validating driver "kvm2" against <nil>
	I1018 14:08:38.429248 1760410 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 14:08:38.429988 1760410 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 14:08:38.430081 1760410 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-1755824/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 14:08:38.444435 1760410 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 14:08:38.444496 1760410 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-1755824/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 14:08:38.459956 1760410 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 14:08:38.460007 1760410 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 14:08:38.460292 1760410 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 14:08:38.460324 1760410 cni.go:84] Creating CNI manager for ""
	I1018 14:08:38.460395 1760410 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 14:08:38.460407 1760410 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1018 14:08:38.460458 1760410 start.go:349] cluster config:
	{Name:addons-891059 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-891059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:08:38.460561 1760410 iso.go:125] acquiring lock: {Name:mk7faf1d3c636cdbb2becc20102b665984151b51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 14:08:38.462275 1760410 out.go:179] * Starting "addons-891059" primary control-plane node in "addons-891059" cluster
	I1018 14:08:38.463616 1760410 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 14:08:38.463663 1760410 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 14:08:38.463679 1760410 cache.go:58] Caching tarball of preloaded images
	I1018 14:08:38.463782 1760410 preload.go:233] Found /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 14:08:38.463797 1760410 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 14:08:38.464313 1760410 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/config.json ...
	I1018 14:08:38.464364 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/config.json: {Name:mk7320464dda7a1239a5641208a2baa2eb0aeb82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:08:38.464529 1760410 start.go:360] acquireMachinesLock for addons-891059: {Name:mkd96faf82baee5d117338197f9c6cbf4f45de94 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1018 14:08:38.464580 1760410 start.go:364] duration metric: took 35.666µs to acquireMachinesLock for "addons-891059"
	I1018 14:08:38.464596 1760410 start.go:93] Provisioning new machine with config: &{Name:addons-891059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-891059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 14:08:38.464647 1760410 start.go:125] createHost starting for "" (driver="kvm2")
	I1018 14:08:38.467259 1760410 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1018 14:08:38.467474 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:08:38.467524 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:08:38.481384 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38917
	I1018 14:08:38.481876 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:08:38.482458 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:08:38.482488 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:08:38.482906 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:08:38.483171 1760410 main.go:141] libmachine: (addons-891059) Calling .GetMachineName
	I1018 14:08:38.483408 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:08:38.483601 1760410 start.go:159] libmachine.API.Create for "addons-891059" (driver="kvm2")
	I1018 14:08:38.483638 1760410 client.go:168] LocalClient.Create starting
	I1018 14:08:38.483679 1760410 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem
	I1018 14:08:38.745193 1760410 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/cert.pem
	I1018 14:08:39.239522 1760410 main.go:141] libmachine: Running pre-create checks...
	I1018 14:08:39.239552 1760410 main.go:141] libmachine: (addons-891059) Calling .PreCreateCheck
	I1018 14:08:39.240096 1760410 main.go:141] libmachine: (addons-891059) Calling .GetConfigRaw
	I1018 14:08:39.240581 1760410 main.go:141] libmachine: Creating machine...
	I1018 14:08:39.240598 1760410 main.go:141] libmachine: (addons-891059) Calling .Create
	I1018 14:08:39.240735 1760410 main.go:141] libmachine: (addons-891059) creating domain...
	I1018 14:08:39.240756 1760410 main.go:141] libmachine: (addons-891059) creating network...
	I1018 14:08:39.242180 1760410 main.go:141] libmachine: (addons-891059) DBG | found existing default network
	I1018 14:08:39.242394 1760410 main.go:141] libmachine: (addons-891059) DBG | <network>
	I1018 14:08:39.242421 1760410 main.go:141] libmachine: (addons-891059) DBG |   <name>default</name>
	I1018 14:08:39.242432 1760410 main.go:141] libmachine: (addons-891059) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1018 14:08:39.242439 1760410 main.go:141] libmachine: (addons-891059) DBG |   <forward mode='nat'>
	I1018 14:08:39.242474 1760410 main.go:141] libmachine: (addons-891059) DBG |     <nat>
	I1018 14:08:39.242495 1760410 main.go:141] libmachine: (addons-891059) DBG |       <port start='1024' end='65535'/>
	I1018 14:08:39.242573 1760410 main.go:141] libmachine: (addons-891059) DBG |     </nat>
	I1018 14:08:39.242596 1760410 main.go:141] libmachine: (addons-891059) DBG |   </forward>
	I1018 14:08:39.242607 1760410 main.go:141] libmachine: (addons-891059) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1018 14:08:39.242619 1760410 main.go:141] libmachine: (addons-891059) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1018 14:08:39.242634 1760410 main.go:141] libmachine: (addons-891059) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1018 14:08:39.242645 1760410 main.go:141] libmachine: (addons-891059) DBG |     <dhcp>
	I1018 14:08:39.242658 1760410 main.go:141] libmachine: (addons-891059) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1018 14:08:39.242666 1760410 main.go:141] libmachine: (addons-891059) DBG |     </dhcp>
	I1018 14:08:39.242673 1760410 main.go:141] libmachine: (addons-891059) DBG |   </ip>
	I1018 14:08:39.242680 1760410 main.go:141] libmachine: (addons-891059) DBG | </network>
	I1018 14:08:39.242694 1760410 main.go:141] libmachine: (addons-891059) DBG | 
	I1018 14:08:39.243130 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:39.242976 1760437 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123570}
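[editor's note] The line above is the free-subnet probe: the driver walks a list of candidate private /24s and takes the first one that does not collide with an address already present on a host interface. A minimal sketch of that idea using only the Go standard library (the candidate list and helper name are illustrative, not minikube's actual code):

    package main

    import (
        "fmt"
        "net"
    )

    // subnetInUse reports whether any host interface already has an address
    // inside the candidate subnet (a rough stand-in for the real probe).
    func subnetInUse(subnet *net.IPNet) bool {
        addrs, err := net.InterfaceAddrs()
        if err != nil {
            return true // be conservative: treat errors as "in use"
        }
        for _, a := range addrs {
            if ipnet, ok := a.(*net.IPNet); ok && subnet.Contains(ipnet.IP) {
                return true
            }
        }
        return false
    }

    func main() {
        // Hypothetical candidate list; 192.168.39.0/24 is the first one tried here.
        for _, cidr := range []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"} {
            _, subnet, _ := net.ParseCIDR(cidr)
            if !subnetInUse(subnet) {
                fmt.Println("using free private subnet", cidr)
                return
            }
        }
    }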
	I1018 14:08:39.243178 1760410 main.go:141] libmachine: (addons-891059) DBG | defining private network:
	I1018 14:08:39.243193 1760410 main.go:141] libmachine: (addons-891059) DBG | 
	I1018 14:08:39.243204 1760410 main.go:141] libmachine: (addons-891059) DBG | <network>
	I1018 14:08:39.243216 1760410 main.go:141] libmachine: (addons-891059) DBG |   <name>mk-addons-891059</name>
	I1018 14:08:39.243222 1760410 main.go:141] libmachine: (addons-891059) DBG |   <dns enable='no'/>
	I1018 14:08:39.243227 1760410 main.go:141] libmachine: (addons-891059) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1018 14:08:39.243234 1760410 main.go:141] libmachine: (addons-891059) DBG |     <dhcp>
	I1018 14:08:39.243239 1760410 main.go:141] libmachine: (addons-891059) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1018 14:08:39.243245 1760410 main.go:141] libmachine: (addons-891059) DBG |     </dhcp>
	I1018 14:08:39.243249 1760410 main.go:141] libmachine: (addons-891059) DBG |   </ip>
	I1018 14:08:39.243263 1760410 main.go:141] libmachine: (addons-891059) DBG | </network>
	I1018 14:08:39.243270 1760410 main.go:141] libmachine: (addons-891059) DBG | 
	I1018 14:08:39.248946 1760410 main.go:141] libmachine: (addons-891059) DBG | creating private network mk-addons-891059 192.168.39.0/24...
	I1018 14:08:39.319941 1760410 main.go:141] libmachine: (addons-891059) DBG | private network mk-addons-891059 192.168.39.0/24 created
	I1018 14:08:39.320210 1760410 main.go:141] libmachine: (addons-891059) DBG | <network>
	I1018 14:08:39.320231 1760410 main.go:141] libmachine: (addons-891059) DBG |   <name>mk-addons-891059</name>
	I1018 14:08:39.320247 1760410 main.go:141] libmachine: (addons-891059) setting up store path in /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059 ...
	I1018 14:08:39.320262 1760410 main.go:141] libmachine: (addons-891059) DBG |   <uuid>3e7dc5ca-8c6a-4f5a-8f08-752a5d85d27d</uuid>
	I1018 14:08:39.320883 1760410 main.go:141] libmachine: (addons-891059) building disk image from file:///home/jenkins/minikube-integration/21409-1755824/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1018 14:08:39.320919 1760410 main.go:141] libmachine: (addons-891059) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1018 14:08:39.320937 1760410 main.go:141] libmachine: (addons-891059) Downloading /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21409-1755824/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1018 14:08:39.320964 1760410 main.go:141] libmachine: (addons-891059) DBG |   <mac address='52:54:00:80:09:dc'/>
	I1018 14:08:39.320974 1760410 main.go:141] libmachine: (addons-891059) DBG |   <dns enable='no'/>
	I1018 14:08:39.320985 1760410 main.go:141] libmachine: (addons-891059) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1018 14:08:39.320997 1760410 main.go:141] libmachine: (addons-891059) DBG |     <dhcp>
	I1018 14:08:39.321006 1760410 main.go:141] libmachine: (addons-891059) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1018 14:08:39.321013 1760410 main.go:141] libmachine: (addons-891059) DBG |     </dhcp>
	I1018 14:08:39.321038 1760410 main.go:141] libmachine: (addons-891059) DBG |   </ip>
	I1018 14:08:39.321045 1760410 main.go:141] libmachine: (addons-891059) DBG | </network>
	I1018 14:08:39.321061 1760410 main.go:141] libmachine: (addons-891059) DBG | 
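[editor's note] Behind the "creating private network" lines, the kvm2 driver defines and then starts the network through libvirt. A sketch of that call sequence, assuming the libvirt.org/go/libvirt bindings (the real driver does this inside the docker-machine plugin process):

    package main

    import (
        "log"

        "libvirt.org/go/libvirt"
    )

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Same shape as the mk-addons-891059 definition logged above.
        xml := `<network>
      <name>mk-addons-891059</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp><range start='192.168.39.2' end='192.168.39.253'/></dhcp>
      </ip>
    </network>`

        network, err := conn.NetworkDefineXML(xml) // persistent definition
        if err != nil {
            log.Fatal(err)
        }
        defer network.Free()
        if err := network.Create(); err != nil { // start it (creates virbr1 here)
            log.Fatal(err)
        }
    }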
	I1018 14:08:39.321072 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:39.320218 1760437 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21409-1755824/.minikube
	I1018 14:08:39.610846 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:39.610682 1760437 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa...
	I1018 14:08:39.691572 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:39.691412 1760437 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/addons-891059.rawdisk...
	I1018 14:08:39.691603 1760410 main.go:141] libmachine: (addons-891059) DBG | Writing magic tar header
	I1018 14:08:39.691616 1760410 main.go:141] libmachine: (addons-891059) DBG | Writing SSH key tar header
	I1018 14:08:39.691625 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:39.691531 1760437 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059 ...
	I1018 14:08:39.691639 1760410 main.go:141] libmachine: (addons-891059) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059
	I1018 14:08:39.691766 1760410 main.go:141] libmachine: (addons-891059) setting executable bit set on /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059 (perms=drwx------)
	I1018 14:08:39.691804 1760410 main.go:141] libmachine: (addons-891059) setting executable bit set on /home/jenkins/minikube-integration/21409-1755824/.minikube/machines (perms=drwxr-xr-x)
	I1018 14:08:39.691812 1760410 main.go:141] libmachine: (addons-891059) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines
	I1018 14:08:39.691822 1760410 main.go:141] libmachine: (addons-891059) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-1755824/.minikube
	I1018 14:08:39.691828 1760410 main.go:141] libmachine: (addons-891059) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-1755824
	I1018 14:08:39.691835 1760410 main.go:141] libmachine: (addons-891059) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1018 14:08:39.691839 1760410 main.go:141] libmachine: (addons-891059) DBG | checking permissions on dir: /home/jenkins
	I1018 14:08:39.691848 1760410 main.go:141] libmachine: (addons-891059) DBG | checking permissions on dir: /home
	I1018 14:08:39.691853 1760410 main.go:141] libmachine: (addons-891059) DBG | skipping /home - not owner
	I1018 14:08:39.691897 1760410 main.go:141] libmachine: (addons-891059) setting executable bit set on /home/jenkins/minikube-integration/21409-1755824/.minikube (perms=drwxr-xr-x)
	I1018 14:08:39.691923 1760410 main.go:141] libmachine: (addons-891059) setting executable bit set on /home/jenkins/minikube-integration/21409-1755824 (perms=drwxrwxr-x)
	I1018 14:08:39.691940 1760410 main.go:141] libmachine: (addons-891059) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1018 14:08:39.691998 1760410 main.go:141] libmachine: (addons-891059) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1018 14:08:39.692026 1760410 main.go:141] libmachine: (addons-891059) defining domain...
	I1018 14:08:39.693006 1760410 main.go:141] libmachine: (addons-891059) defining domain using XML: 
	I1018 14:08:39.693019 1760410 main.go:141] libmachine: (addons-891059) <domain type='kvm'>
	I1018 14:08:39.693025 1760410 main.go:141] libmachine: (addons-891059)   <name>addons-891059</name>
	I1018 14:08:39.693030 1760410 main.go:141] libmachine: (addons-891059)   <memory unit='MiB'>4096</memory>
	I1018 14:08:39.693036 1760410 main.go:141] libmachine: (addons-891059)   <vcpu>2</vcpu>
	I1018 14:08:39.693040 1760410 main.go:141] libmachine: (addons-891059)   <features>
	I1018 14:08:39.693046 1760410 main.go:141] libmachine: (addons-891059)     <acpi/>
	I1018 14:08:39.693053 1760410 main.go:141] libmachine: (addons-891059)     <apic/>
	I1018 14:08:39.693058 1760410 main.go:141] libmachine: (addons-891059)     <pae/>
	I1018 14:08:39.693064 1760410 main.go:141] libmachine: (addons-891059)   </features>
	I1018 14:08:39.693069 1760410 main.go:141] libmachine: (addons-891059)   <cpu mode='host-passthrough'>
	I1018 14:08:39.693074 1760410 main.go:141] libmachine: (addons-891059)   </cpu>
	I1018 14:08:39.693078 1760410 main.go:141] libmachine: (addons-891059)   <os>
	I1018 14:08:39.693085 1760410 main.go:141] libmachine: (addons-891059)     <type>hvm</type>
	I1018 14:08:39.693090 1760410 main.go:141] libmachine: (addons-891059)     <boot dev='cdrom'/>
	I1018 14:08:39.693095 1760410 main.go:141] libmachine: (addons-891059)     <boot dev='hd'/>
	I1018 14:08:39.693100 1760410 main.go:141] libmachine: (addons-891059)     <bootmenu enable='no'/>
	I1018 14:08:39.693104 1760410 main.go:141] libmachine: (addons-891059)   </os>
	I1018 14:08:39.693134 1760410 main.go:141] libmachine: (addons-891059)   <devices>
	I1018 14:08:39.693159 1760410 main.go:141] libmachine: (addons-891059)     <disk type='file' device='cdrom'>
	I1018 14:08:39.693176 1760410 main.go:141] libmachine: (addons-891059)       <source file='/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/boot2docker.iso'/>
	I1018 14:08:39.693184 1760410 main.go:141] libmachine: (addons-891059)       <target dev='hdc' bus='scsi'/>
	I1018 14:08:39.693194 1760410 main.go:141] libmachine: (addons-891059)       <readonly/>
	I1018 14:08:39.693202 1760410 main.go:141] libmachine: (addons-891059)     </disk>
	I1018 14:08:39.693215 1760410 main.go:141] libmachine: (addons-891059)     <disk type='file' device='disk'>
	I1018 14:08:39.693225 1760410 main.go:141] libmachine: (addons-891059)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1018 14:08:39.693242 1760410 main.go:141] libmachine: (addons-891059)       <source file='/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/addons-891059.rawdisk'/>
	I1018 14:08:39.693252 1760410 main.go:141] libmachine: (addons-891059)       <target dev='hda' bus='virtio'/>
	I1018 14:08:39.693259 1760410 main.go:141] libmachine: (addons-891059)     </disk>
	I1018 14:08:39.693271 1760410 main.go:141] libmachine: (addons-891059)     <interface type='network'>
	I1018 14:08:39.693281 1760410 main.go:141] libmachine: (addons-891059)       <source network='mk-addons-891059'/>
	I1018 14:08:39.693293 1760410 main.go:141] libmachine: (addons-891059)       <model type='virtio'/>
	I1018 14:08:39.693303 1760410 main.go:141] libmachine: (addons-891059)     </interface>
	I1018 14:08:39.693324 1760410 main.go:141] libmachine: (addons-891059)     <interface type='network'>
	I1018 14:08:39.693354 1760410 main.go:141] libmachine: (addons-891059)       <source network='default'/>
	I1018 14:08:39.693363 1760410 main.go:141] libmachine: (addons-891059)       <model type='virtio'/>
	I1018 14:08:39.693367 1760410 main.go:141] libmachine: (addons-891059)     </interface>
	I1018 14:08:39.693373 1760410 main.go:141] libmachine: (addons-891059)     <serial type='pty'>
	I1018 14:08:39.693396 1760410 main.go:141] libmachine: (addons-891059)       <target port='0'/>
	I1018 14:08:39.693404 1760410 main.go:141] libmachine: (addons-891059)     </serial>
	I1018 14:08:39.693408 1760410 main.go:141] libmachine: (addons-891059)     <console type='pty'>
	I1018 14:08:39.693416 1760410 main.go:141] libmachine: (addons-891059)       <target type='serial' port='0'/>
	I1018 14:08:39.693426 1760410 main.go:141] libmachine: (addons-891059)     </console>
	I1018 14:08:39.693446 1760410 main.go:141] libmachine: (addons-891059)     <rng model='virtio'>
	I1018 14:08:39.693467 1760410 main.go:141] libmachine: (addons-891059)       <backend model='random'>/dev/random</backend>
	I1018 14:08:39.693482 1760410 main.go:141] libmachine: (addons-891059)     </rng>
	I1018 14:08:39.693492 1760410 main.go:141] libmachine: (addons-891059)   </devices>
	I1018 14:08:39.693501 1760410 main.go:141] libmachine: (addons-891059) </domain>
	I1018 14:08:39.693506 1760410 main.go:141] libmachine: (addons-891059) 
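[editor's note] Defining and starting the domain is again a pair of libvirt calls; libvirt then echoes back an expanded copy of the XML (emulator path, PCI addresses, controllers), which is what the "starting domain XML" DBG block below shows. A sketch with the libvirt.org/go/libvirt bindings, using a trimmed-down <domain> document:

    package main

    import (
        "log"

        "libvirt.org/go/libvirt"
    )

    // defineAndStart mirrors the "defining domain..." / "starting domain..."
    // steps: persist the <domain> XML, then boot it.
    func defineAndStart(conn *libvirt.Connect, domainXML string) error {
        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            return err
        }
        defer dom.Free()
        return dom.Create()
    }

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        // A minimal stand-in for the full <domain type='kvm'> document above.
        const domainXML = `<domain type='kvm'>
      <name>addons-891059</name>
      <memory unit='MiB'>4096</memory>
      <vcpu>2</vcpu>
      <os><type>hvm</type></os>
    </domain>`
        if err := defineAndStart(conn, domainXML); err != nil {
            log.Fatal(err)
        }
    }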
	I1018 14:08:39.706650 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:f4:cf:b8 in network default
	I1018 14:08:39.707254 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:39.707274 1760410 main.go:141] libmachine: (addons-891059) starting domain...
	I1018 14:08:39.707286 1760410 main.go:141] libmachine: (addons-891059) ensuring networks are active...
	I1018 14:08:39.707989 1760410 main.go:141] libmachine: (addons-891059) Ensuring network default is active
	I1018 14:08:39.708292 1760410 main.go:141] libmachine: (addons-891059) Ensuring network mk-addons-891059 is active
	I1018 14:08:39.708895 1760410 main.go:141] libmachine: (addons-891059) getting domain XML...
	I1018 14:08:39.709831 1760410 main.go:141] libmachine: (addons-891059) DBG | starting domain XML:
	I1018 14:08:39.709853 1760410 main.go:141] libmachine: (addons-891059) DBG | <domain type='kvm'>
	I1018 14:08:39.709867 1760410 main.go:141] libmachine: (addons-891059) DBG |   <name>addons-891059</name>
	I1018 14:08:39.709876 1760410 main.go:141] libmachine: (addons-891059) DBG |   <uuid>372d9231-4fa4-4480-95fc-5052e6676096</uuid>
	I1018 14:08:39.709886 1760410 main.go:141] libmachine: (addons-891059) DBG |   <memory unit='KiB'>4194304</memory>
	I1018 14:08:39.709894 1760410 main.go:141] libmachine: (addons-891059) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1018 14:08:39.709903 1760410 main.go:141] libmachine: (addons-891059) DBG |   <vcpu placement='static'>2</vcpu>
	I1018 14:08:39.709907 1760410 main.go:141] libmachine: (addons-891059) DBG |   <os>
	I1018 14:08:39.709920 1760410 main.go:141] libmachine: (addons-891059) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1018 14:08:39.709930 1760410 main.go:141] libmachine: (addons-891059) DBG |     <boot dev='cdrom'/>
	I1018 14:08:39.709943 1760410 main.go:141] libmachine: (addons-891059) DBG |     <boot dev='hd'/>
	I1018 14:08:39.709954 1760410 main.go:141] libmachine: (addons-891059) DBG |     <bootmenu enable='no'/>
	I1018 14:08:39.709988 1760410 main.go:141] libmachine: (addons-891059) DBG |   </os>
	I1018 14:08:39.710010 1760410 main.go:141] libmachine: (addons-891059) DBG |   <features>
	I1018 14:08:39.710020 1760410 main.go:141] libmachine: (addons-891059) DBG |     <acpi/>
	I1018 14:08:39.710028 1760410 main.go:141] libmachine: (addons-891059) DBG |     <apic/>
	I1018 14:08:39.710042 1760410 main.go:141] libmachine: (addons-891059) DBG |     <pae/>
	I1018 14:08:39.710052 1760410 main.go:141] libmachine: (addons-891059) DBG |   </features>
	I1018 14:08:39.710065 1760410 main.go:141] libmachine: (addons-891059) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1018 14:08:39.710080 1760410 main.go:141] libmachine: (addons-891059) DBG |   <clock offset='utc'/>
	I1018 14:08:39.710094 1760410 main.go:141] libmachine: (addons-891059) DBG |   <on_poweroff>destroy</on_poweroff>
	I1018 14:08:39.710106 1760410 main.go:141] libmachine: (addons-891059) DBG |   <on_reboot>restart</on_reboot>
	I1018 14:08:39.710116 1760410 main.go:141] libmachine: (addons-891059) DBG |   <on_crash>destroy</on_crash>
	I1018 14:08:39.710124 1760410 main.go:141] libmachine: (addons-891059) DBG |   <devices>
	I1018 14:08:39.710141 1760410 main.go:141] libmachine: (addons-891059) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1018 14:08:39.710157 1760410 main.go:141] libmachine: (addons-891059) DBG |     <disk type='file' device='cdrom'>
	I1018 14:08:39.710174 1760410 main.go:141] libmachine: (addons-891059) DBG |       <driver name='qemu' type='raw'/>
	I1018 14:08:39.710189 1760410 main.go:141] libmachine: (addons-891059) DBG |       <source file='/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/boot2docker.iso'/>
	I1018 14:08:39.710202 1760410 main.go:141] libmachine: (addons-891059) DBG |       <target dev='hdc' bus='scsi'/>
	I1018 14:08:39.710213 1760410 main.go:141] libmachine: (addons-891059) DBG |       <readonly/>
	I1018 14:08:39.710241 1760410 main.go:141] libmachine: (addons-891059) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1018 14:08:39.710261 1760410 main.go:141] libmachine: (addons-891059) DBG |     </disk>
	I1018 14:08:39.710268 1760410 main.go:141] libmachine: (addons-891059) DBG |     <disk type='file' device='disk'>
	I1018 14:08:39.710278 1760410 main.go:141] libmachine: (addons-891059) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1018 14:08:39.710289 1760410 main.go:141] libmachine: (addons-891059) DBG |       <source file='/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/addons-891059.rawdisk'/>
	I1018 14:08:39.710297 1760410 main.go:141] libmachine: (addons-891059) DBG |       <target dev='hda' bus='virtio'/>
	I1018 14:08:39.710304 1760410 main.go:141] libmachine: (addons-891059) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1018 14:08:39.710311 1760410 main.go:141] libmachine: (addons-891059) DBG |     </disk>
	I1018 14:08:39.710317 1760410 main.go:141] libmachine: (addons-891059) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1018 14:08:39.710325 1760410 main.go:141] libmachine: (addons-891059) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1018 14:08:39.710331 1760410 main.go:141] libmachine: (addons-891059) DBG |     </controller>
	I1018 14:08:39.710338 1760410 main.go:141] libmachine: (addons-891059) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1018 14:08:39.710353 1760410 main.go:141] libmachine: (addons-891059) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1018 14:08:39.710359 1760410 main.go:141] libmachine: (addons-891059) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1018 14:08:39.710375 1760410 main.go:141] libmachine: (addons-891059) DBG |     </controller>
	I1018 14:08:39.710394 1760410 main.go:141] libmachine: (addons-891059) DBG |     <interface type='network'>
	I1018 14:08:39.710417 1760410 main.go:141] libmachine: (addons-891059) DBG |       <mac address='52:54:00:12:2f:9d'/>
	I1018 14:08:39.710440 1760410 main.go:141] libmachine: (addons-891059) DBG |       <source network='mk-addons-891059'/>
	I1018 14:08:39.710448 1760410 main.go:141] libmachine: (addons-891059) DBG |       <model type='virtio'/>
	I1018 14:08:39.710453 1760410 main.go:141] libmachine: (addons-891059) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1018 14:08:39.710459 1760410 main.go:141] libmachine: (addons-891059) DBG |     </interface>
	I1018 14:08:39.710463 1760410 main.go:141] libmachine: (addons-891059) DBG |     <interface type='network'>
	I1018 14:08:39.710469 1760410 main.go:141] libmachine: (addons-891059) DBG |       <mac address='52:54:00:f4:cf:b8'/>
	I1018 14:08:39.710473 1760410 main.go:141] libmachine: (addons-891059) DBG |       <source network='default'/>
	I1018 14:08:39.710478 1760410 main.go:141] libmachine: (addons-891059) DBG |       <model type='virtio'/>
	I1018 14:08:39.710499 1760410 main.go:141] libmachine: (addons-891059) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1018 14:08:39.710511 1760410 main.go:141] libmachine: (addons-891059) DBG |     </interface>
	I1018 14:08:39.710529 1760410 main.go:141] libmachine: (addons-891059) DBG |     <serial type='pty'>
	I1018 14:08:39.710546 1760410 main.go:141] libmachine: (addons-891059) DBG |       <target type='isa-serial' port='0'>
	I1018 14:08:39.710558 1760410 main.go:141] libmachine: (addons-891059) DBG |         <model name='isa-serial'/>
	I1018 14:08:39.710568 1760410 main.go:141] libmachine: (addons-891059) DBG |       </target>
	I1018 14:08:39.710575 1760410 main.go:141] libmachine: (addons-891059) DBG |     </serial>
	I1018 14:08:39.710584 1760410 main.go:141] libmachine: (addons-891059) DBG |     <console type='pty'>
	I1018 14:08:39.710590 1760410 main.go:141] libmachine: (addons-891059) DBG |       <target type='serial' port='0'/>
	I1018 14:08:39.710597 1760410 main.go:141] libmachine: (addons-891059) DBG |     </console>
	I1018 14:08:39.710602 1760410 main.go:141] libmachine: (addons-891059) DBG |     <input type='mouse' bus='ps2'/>
	I1018 14:08:39.710611 1760410 main.go:141] libmachine: (addons-891059) DBG |     <input type='keyboard' bus='ps2'/>
	I1018 14:08:39.710619 1760410 main.go:141] libmachine: (addons-891059) DBG |     <audio id='1' type='none'/>
	I1018 14:08:39.710635 1760410 main.go:141] libmachine: (addons-891059) DBG |     <memballoon model='virtio'>
	I1018 14:08:39.710650 1760410 main.go:141] libmachine: (addons-891059) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1018 14:08:39.710670 1760410 main.go:141] libmachine: (addons-891059) DBG |     </memballoon>
	I1018 14:08:39.710681 1760410 main.go:141] libmachine: (addons-891059) DBG |     <rng model='virtio'>
	I1018 14:08:39.710688 1760410 main.go:141] libmachine: (addons-891059) DBG |       <backend model='random'>/dev/random</backend>
	I1018 14:08:39.710700 1760410 main.go:141] libmachine: (addons-891059) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1018 14:08:39.710714 1760410 main.go:141] libmachine: (addons-891059) DBG |     </rng>
	I1018 14:08:39.710725 1760410 main.go:141] libmachine: (addons-891059) DBG |   </devices>
	I1018 14:08:39.710731 1760410 main.go:141] libmachine: (addons-891059) DBG | </domain>
	I1018 14:08:39.710744 1760410 main.go:141] libmachine: (addons-891059) DBG | 
	I1018 14:08:41.127813 1760410 main.go:141] libmachine: (addons-891059) waiting for domain to start...
	I1018 14:08:41.129181 1760410 main.go:141] libmachine: (addons-891059) domain is now running
	I1018 14:08:41.129199 1760410 main.go:141] libmachine: (addons-891059) waiting for IP...
	I1018 14:08:41.130215 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:41.130734 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:41.130765 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:41.131111 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:41.131182 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:41.131117 1760437 retry.go:31] will retry after 310.436274ms: waiting for domain to come up
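[editor's note] The IP wait first asks libvirt for DHCP leases (source=lease), falls back to the ARP table (source=arp), and on failure sleeps for a jittered, growing interval before retrying, which produces the "will retry after ..." cadence below. A self-contained sketch of that retry shape (the probe itself is stubbed out):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoIP = errors.New("no IP yet")

    // lookupIP stands in for the lease/ARP probe; it always fails here so the
    // backoff behaviour is visible.
    func lookupIP() (string, error) { return "", errNoIP }

    func main() {
        backoff := 300 * time.Millisecond
        for i := 0; i < 5; i++ {
            if ip, err := lookupIP(); err == nil {
                fmt.Println("found domain IP:", ip)
                return
            }
            // Jittered, slowly growing delay, similar in spirit to the
            // "will retry after ..." intervals in the log.
            d := backoff + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("will retry after %v: waiting for domain to come up\n", d)
            time.Sleep(d)
            backoff = backoff * 3 / 2
        }
    }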
	I1018 14:08:41.443955 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:41.444643 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:41.444667 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:41.444959 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:41.445013 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:41.444938 1760437 retry.go:31] will retry after 310.095624ms: waiting for domain to come up
	I1018 14:08:41.756412 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:41.756912 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:41.756985 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:41.757237 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:41.757264 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:41.757211 1760437 retry.go:31] will retry after 403.034899ms: waiting for domain to come up
	I1018 14:08:42.161632 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:42.162259 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:42.162290 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:42.162631 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:42.162653 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:42.162588 1760437 retry.go:31] will retry after 392.033324ms: waiting for domain to come up
	I1018 14:08:42.555954 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:42.556467 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:42.556490 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:42.556794 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:42.556833 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:42.556772 1760437 retry.go:31] will retry after 563.122226ms: waiting for domain to come up
	I1018 14:08:43.121698 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:43.122213 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:43.122240 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:43.122649 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:43.122673 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:43.122588 1760437 retry.go:31] will retry after 654.00858ms: waiting for domain to come up
	I1018 14:08:43.778430 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:43.778988 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:43.779017 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:43.779284 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:43.779359 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:43.779296 1760437 retry.go:31] will retry after 861.369309ms: waiting for domain to come up
	I1018 14:08:44.642386 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:44.642972 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:44.643001 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:44.643258 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:44.643325 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:44.643266 1760437 retry.go:31] will retry after 1.120629341s: waiting for domain to come up
	I1018 14:08:45.765704 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:45.766202 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:45.766225 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:45.766596 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:45.766622 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:45.766568 1760437 retry.go:31] will retry after 1.280814413s: waiting for domain to come up
	I1018 14:08:47.049323 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:47.049871 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:47.049898 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:47.050228 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:47.050287 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:47.050222 1760437 retry.go:31] will retry after 2.205238568s: waiting for domain to come up
	I1018 14:08:49.257773 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:49.258389 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:49.258419 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:49.258809 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:49.258836 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:49.258779 1760437 retry.go:31] will retry after 2.31868491s: waiting for domain to come up
	I1018 14:08:51.580165 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:51.580745 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:51.580775 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:51.581147 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:51.581179 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:51.581113 1760437 retry.go:31] will retry after 2.275257905s: waiting for domain to come up
	I1018 14:08:53.858516 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:53.859085 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:53.859110 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:53.859415 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:53.859447 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:53.859390 1760437 retry.go:31] will retry after 3.968512343s: waiting for domain to come up
	I1018 14:08:57.829253 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:57.829924 1760410 main.go:141] libmachine: (addons-891059) found domain IP: 192.168.39.100
	I1018 14:08:57.829948 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has current primary IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:57.829954 1760410 main.go:141] libmachine: (addons-891059) reserving static IP address...
	I1018 14:08:57.830357 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find host DHCP lease matching {name: "addons-891059", mac: "52:54:00:12:2f:9d", ip: "192.168.39.100"} in network mk-addons-891059
	I1018 14:08:58.036271 1760410 main.go:141] libmachine: (addons-891059) DBG | Getting to WaitForSSH function...
	I1018 14:08:58.036306 1760410 main.go:141] libmachine: (addons-891059) reserved static IP address 192.168.39.100 for domain addons-891059
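[editor's note] Reserving the static IP means pinning the domain's MAC to 192.168.39.100 in the network's DHCP configuration. With the libvirt.org/go/libvirt bindings that is a live-plus-persistent network update (a sketch, not the driver's exact code):

    package main

    import (
        "log"

        "libvirt.org/go/libvirt"
    )

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        network, err := conn.LookupNetworkByName("mk-addons-891059")
        if err != nil {
            log.Fatal(err)
        }
        defer network.Free()

        // Pin the domain's MAC to 192.168.39.100 in the DHCP section.
        hostXML := `<host mac='52:54:00:12:2f:9d' name='addons-891059' ip='192.168.39.100'/>`
        err = network.Update(
            libvirt.NETWORK_UPDATE_COMMAND_ADD_LAST,
            libvirt.NETWORK_SECTION_IP_DHCP_HOST,
            -1, // parent index: the first (and only) <ip> element
            hostXML,
            libvirt.NETWORK_UPDATE_AFFECT_LIVE|libvirt.NETWORK_UPDATE_AFFECT_CONFIG,
        )
        if err != nil {
            log.Fatal(err)
        }
    }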
	I1018 14:08:58.036334 1760410 main.go:141] libmachine: (addons-891059) waiting for SSH...
	I1018 14:08:58.039556 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.040071 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:minikube Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:58.040113 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.040427 1760410 main.go:141] libmachine: (addons-891059) DBG | Using SSH client type: external
	I1018 14:08:58.040457 1760410 main.go:141] libmachine: (addons-891059) DBG | Using SSH private key: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa (-rw-------)
	I1018 14:08:58.040489 1760410 main.go:141] libmachine: (addons-891059) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1018 14:08:58.040505 1760410 main.go:141] libmachine: (addons-891059) DBG | About to run SSH command:
	I1018 14:08:58.040518 1760410 main.go:141] libmachine: (addons-891059) DBG | exit 0
	I1018 14:08:58.178221 1760410 main.go:141] libmachine: (addons-891059) DBG | SSH cmd err, output: <nil>: 
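[editor's note] The external-SSH probe builds the /usr/bin/ssh command line dumped above and simply checks that `exit 0` succeeds against the new machine. The equivalent with os/exec (paths and address taken from the log; the option list is abbreviated):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("/usr/bin/ssh",
            "-F", "/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", "/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa",
            "-p", "22",
            "docker@192.168.39.100",
            "exit 0",
        )
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("SSH not ready yet: %v (%s)", err, out)
        }
        log.Println("SSH is up")
    }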
	I1018 14:08:58.178611 1760410 main.go:141] libmachine: (addons-891059) domain creation complete
	I1018 14:08:58.178979 1760410 main.go:141] libmachine: (addons-891059) Calling .GetConfigRaw
	I1018 14:08:58.179725 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:08:58.179914 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:08:58.180097 1760410 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1018 14:08:58.180117 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:08:58.181922 1760410 main.go:141] libmachine: Detecting operating system of created instance...
	I1018 14:08:58.181937 1760410 main.go:141] libmachine: Waiting for SSH to be available...
	I1018 14:08:58.181946 1760410 main.go:141] libmachine: Getting to WaitForSSH function...
	I1018 14:08:58.181953 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:58.184676 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.185179 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:58.185207 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.185454 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:58.185640 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:58.185815 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:58.185930 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:58.186116 1760410 main.go:141] libmachine: Using SSH client type: native
	I1018 14:08:58.186465 1760410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1018 14:08:58.186483 1760410 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1018 14:08:58.305360 1760410 main.go:141] libmachine: SSH cmd err, output: <nil>: 
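[editor's note] The "native" SSH client used from here on is an in-process Go client rather than /usr/bin/ssh. A sketch of the same `exit 0` probe with golang.org/x/crypto/ssh (an illustration, not minikube's exact wiring):

    package main

    import (
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no
        }
        client, err := ssh.Dial("tcp", "192.168.39.100:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        if err := sess.Run("exit 0"); err != nil {
            log.Fatal(err)
        }
    }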
	I1018 14:08:58.305387 1760410 main.go:141] libmachine: Detecting the provisioner...
	I1018 14:08:58.305399 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:58.308732 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.309086 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:58.309110 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.309407 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:58.309679 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:58.309898 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:58.310049 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:58.310245 1760410 main.go:141] libmachine: Using SSH client type: native
	I1018 14:08:58.310526 1760410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1018 14:08:58.310542 1760410 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1018 14:08:58.429225 1760410 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1018 14:08:58.429329 1760410 main.go:141] libmachine: found compatible host: buildroot
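[editor's note] Provisioner detection boils down to parsing the `cat /etc/os-release` output above and matching the ID field. A small sketch:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // detectProvisioner picks a provisioner from `cat /etc/os-release` output,
    // the way the "Detecting the provisioner..." step matches ID=buildroot.
    func detectProvisioner(osRelease string) string {
        sc := bufio.NewScanner(strings.NewReader(osRelease))
        for sc.Scan() {
            if id, ok := strings.CutPrefix(sc.Text(), "ID="); ok {
                return strings.Trim(id, `"`)
            }
        }
        return "unknown"
    }

    func main() {
        out := "NAME=Buildroot\nVERSION=2025.02-dirty\nID=buildroot\nVERSION_ID=2025.02\n"
        fmt.Println("found compatible host:", detectProvisioner(out)) // buildroot
    }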
	I1018 14:08:58.429364 1760410 main.go:141] libmachine: Provisioning with buildroot...
	I1018 14:08:58.429383 1760410 main.go:141] libmachine: (addons-891059) Calling .GetMachineName
	I1018 14:08:58.429696 1760410 buildroot.go:166] provisioning hostname "addons-891059"
	I1018 14:08:58.429732 1760410 main.go:141] libmachine: (addons-891059) Calling .GetMachineName
	I1018 14:08:58.429974 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:58.433221 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.433619 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:58.433638 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.433891 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:58.434117 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:58.434290 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:58.434435 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:58.434615 1760410 main.go:141] libmachine: Using SSH client type: native
	I1018 14:08:58.434828 1760410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1018 14:08:58.434841 1760410 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-891059 && echo "addons-891059" | sudo tee /etc/hostname
	I1018 14:08:58.571164 1760410 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-891059
	
	I1018 14:08:58.571201 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:58.574587 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.575023 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:58.575060 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.575255 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:58.575484 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:58.575706 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:58.575818 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:58.576059 1760410 main.go:141] libmachine: Using SSH client type: native
	I1018 14:08:58.576292 1760410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1018 14:08:58.576310 1760410 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-891059' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-891059/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-891059' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 14:08:58.705558 1760410 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 14:08:58.705593 1760410 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21409-1755824/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-1755824/.minikube}
	I1018 14:08:58.705650 1760410 buildroot.go:174] setting up certificates
	I1018 14:08:58.705677 1760410 provision.go:84] configureAuth start
	I1018 14:08:58.705691 1760410 main.go:141] libmachine: (addons-891059) Calling .GetMachineName
	I1018 14:08:58.706037 1760410 main.go:141] libmachine: (addons-891059) Calling .GetIP
	I1018 14:08:58.709084 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.709428 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:58.709454 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.709701 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:58.712025 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.712527 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:58.712572 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.712679 1760410 provision.go:143] copyHostCerts
	I1018 14:08:58.712765 1760410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.pem (1082 bytes)
	I1018 14:08:58.712925 1760410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-1755824/.minikube/cert.pem (1123 bytes)
	I1018 14:08:58.713027 1760410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-1755824/.minikube/key.pem (1675 bytes)
	I1018 14:08:58.713099 1760410 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca-key.pem org=jenkins.addons-891059 san=[127.0.0.1 192.168.39.100 addons-891059 localhost minikube]
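[editor's note] The server certificate carries the SAN list from the log line above (127.0.0.1, 192.168.39.100, addons-891059, localhost, minikube). A toy version with crypto/x509; self-signed here for brevity, whereas the real step signs with the minikube CA key:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-891059"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            // SANs mirroring the san=[...] list in the log.
            DNSNames:    []string{"addons-891059", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.100")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        fmt.Println(len(der), err)
    }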
	I1018 14:08:59.195381 1760410 provision.go:177] copyRemoteCerts
	I1018 14:08:59.195454 1760410 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 14:08:59.195481 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:59.198489 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.198846 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.198881 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.199059 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:59.199299 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:59.199483 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:59.199691 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:08:59.292928 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 14:08:59.325386 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 14:08:59.357335 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 14:08:59.389117 1760410 provision.go:87] duration metric: took 683.421516ms to configureAuth
	I1018 14:08:59.389152 1760410 buildroot.go:189] setting minikube options for container-runtime
	I1018 14:08:59.389391 1760410 config.go:182] Loaded profile config "addons-891059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:08:59.389501 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:59.392319 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.392710 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.392752 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.392932 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:59.393164 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:59.393457 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:59.393687 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:59.393910 1760410 main.go:141] libmachine: Using SSH client type: native
	I1018 14:08:59.394130 1760410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1018 14:08:59.394146 1760410 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 14:08:59.663506 1760410 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 14:08:59.663540 1760410 main.go:141] libmachine: Checking connection to Docker...
	I1018 14:08:59.663551 1760410 main.go:141] libmachine: (addons-891059) Calling .GetURL
	I1018 14:08:59.665074 1760410 main.go:141] libmachine: (addons-891059) DBG | using libvirt version 8000000
	I1018 14:08:59.668182 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.668663 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.668695 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.668860 1760410 main.go:141] libmachine: Docker is up and running!
	I1018 14:08:59.668875 1760410 main.go:141] libmachine: Reticulating splines...
	I1018 14:08:59.668883 1760410 client.go:171] duration metric: took 21.185236601s to LocalClient.Create
	I1018 14:08:59.668913 1760410 start.go:167] duration metric: took 21.185315141s to libmachine.API.Create "addons-891059"
	I1018 14:08:59.668930 1760410 start.go:293] postStartSetup for "addons-891059" (driver="kvm2")
	I1018 14:08:59.668947 1760410 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 14:08:59.668967 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:08:59.669233 1760410 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 14:08:59.669269 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:59.671533 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.671957 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.671985 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.672144 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:59.672364 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:59.672523 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:59.672667 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:08:59.764031 1760410 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 14:08:59.769115 1760410 info.go:137] Remote host: Buildroot 2025.02
	I1018 14:08:59.769146 1760410 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1755824/.minikube/addons for local assets ...
	I1018 14:08:59.769224 1760410 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1755824/.minikube/files for local assets ...
	I1018 14:08:59.769248 1760410 start.go:296] duration metric: took 100.307576ms for postStartSetup
	I1018 14:08:59.769292 1760410 main.go:141] libmachine: (addons-891059) Calling .GetConfigRaw
	I1018 14:08:59.769961 1760410 main.go:141] libmachine: (addons-891059) Calling .GetIP
	I1018 14:08:59.773479 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.773901 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.773934 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.774210 1760410 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/config.json ...
	I1018 14:08:59.774465 1760410 start.go:128] duration metric: took 21.309794025s to createHost
	I1018 14:08:59.774492 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:59.777128 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.777506 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.777535 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.777745 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:59.777961 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:59.778171 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:59.778305 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:59.778500 1760410 main.go:141] libmachine: Using SSH client type: native
	I1018 14:08:59.778740 1760410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1018 14:08:59.778756 1760410 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1018 14:08:59.897254 1760410 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760796539.858103251
	
	I1018 14:08:59.897279 1760410 fix.go:216] guest clock: 1760796539.858103251
	I1018 14:08:59.897287 1760410 fix.go:229] Guest: 2025-10-18 14:08:59.858103251 +0000 UTC Remote: 2025-10-18 14:08:59.774480854 +0000 UTC m=+21.430607980 (delta=83.622397ms)
	I1018 14:08:59.897336 1760410 fix.go:200] guest clock delta is within tolerance: 83.622397ms
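
The guest-clock check above runs `date +%s.%N` inside the VM and compares the result to the host clock, resyncing only when the delta exceeds a threshold. A minimal sketch of that comparison; the parsing helper and the 2s tolerance are assumptions, not minikube's actual values:

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)

	// parseGuestClock turns the guest's "1760796539.858103251" output into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		secs, err := strconv.ParseFloat(out, 64)
		if err != nil {
			return time.Time{}, err
		}
		whole := int64(secs)
		frac := int64((secs - float64(whole)) * 1e9)
		return time.Unix(whole, frac).UTC(), nil
	}

	func main() {
		guest, _ := parseGuestClock("1760796539.858103251")
		delta := guest.Sub(time.Now().UTC())
		const tolerance = 2 * time.Second // assumed threshold
		if math.Abs(delta.Seconds()) > tolerance.Seconds() {
			fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", delta)
		} else {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		}
	}
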
	I1018 14:08:59.897364 1760410 start.go:83] releasing machines lock for "addons-891059", held for 21.432776387s
	I1018 14:08:59.897398 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:08:59.897684 1760410 main.go:141] libmachine: (addons-891059) Calling .GetIP
	I1018 14:08:59.901076 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.901487 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.901521 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.901705 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:08:59.902565 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:08:59.902783 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:08:59.902886 1760410 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 14:08:59.902954 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:59.903079 1760410 ssh_runner.go:195] Run: cat /version.json
	I1018 14:08:59.903102 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:59.906580 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.906633 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.907079 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.907125 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.907149 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.907167 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.907386 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:59.907427 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:59.907642 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:59.907647 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:59.907824 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:59.907846 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:59.908031 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:08:59.908099 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:08:59.992932 1760410 ssh_runner.go:195] Run: systemctl --version
	I1018 14:09:00.021820 1760410 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 14:09:00.183446 1760410 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 14:09:00.190803 1760410 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 14:09:00.190911 1760410 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 14:09:00.213058 1760410 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 14:09:00.213091 1760410 start.go:495] detecting cgroup driver to use...
	I1018 14:09:00.213178 1760410 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 14:09:00.233624 1760410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 14:09:00.252522 1760410 docker.go:218] disabling cri-docker service (if available) ...
	I1018 14:09:00.252617 1760410 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 14:09:00.272205 1760410 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 14:09:00.289717 1760410 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 14:09:00.439992 1760410 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 14:09:00.649208 1760410 docker.go:234] disabling docker service ...
	I1018 14:09:00.649292 1760410 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 14:09:00.666373 1760410 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 14:09:00.682992 1760410 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 14:09:00.835422 1760410 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 14:09:00.982700 1760410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 14:09:00.999428 1760410 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 14:09:01.024799 1760410 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 14:09:01.024906 1760410 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:09:01.038654 1760410 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 14:09:01.038752 1760410 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:09:01.052374 1760410 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:09:01.066305 1760410 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:09:01.080191 1760410 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 14:09:01.094600 1760410 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:09:01.108084 1760410 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:09:01.131069 1760410 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
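
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (a sketch: the section headers and any other keys in the stock drop-in are assumptions and are untouched by the edits):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
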
	I1018 14:09:01.144608 1760410 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 14:09:01.156726 1760410 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1018 14:09:01.156791 1760410 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1018 14:09:01.180230 1760410 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
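
The failed sysctl above is the expected first-boot path: the /proc node only appears once the br_netfilter module is loaded, so minikube falls back to modprobe and then enables IPv4 forwarding. The same fallback in Go (a sketch, not minikube's code; needs root):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
			// the sysctl node only exists once br_netfilter is loaded
			if out, mErr := exec.Command("modprobe", "br_netfilter").CombinedOutput(); mErr != nil {
				fmt.Printf("modprobe failed: %v: %s\n", mErr, out)
				return
			}
		}
		// equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
			fmt.Println("enabling ip_forward requires root:", err)
		}
	}
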
	I1018 14:09:01.193680 1760410 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 14:09:01.335791 1760410 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 14:09:01.461561 1760410 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 14:09:01.461683 1760410 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 14:09:01.467775 1760410 start.go:563] Will wait 60s for crictl version
	I1018 14:09:01.467870 1760410 ssh_runner.go:195] Run: which crictl
	I1018 14:09:01.472812 1760410 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1018 14:09:01.516410 1760410 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1018 14:09:01.516518 1760410 ssh_runner.go:195] Run: crio --version
	I1018 14:09:01.548303 1760410 ssh_runner.go:195] Run: crio --version
	I1018 14:09:01.582529 1760410 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1018 14:09:01.583814 1760410 main.go:141] libmachine: (addons-891059) Calling .GetIP
	I1018 14:09:01.588147 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:01.588628 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:01.588667 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:01.588973 1760410 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1018 14:09:01.594159 1760410 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
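
The bash one-liner above rewrites /etc/hosts in place: drop any stale host.minikube.internal line, append the fresh mapping, write to a temp file, and sudo-copy it back. The same logic in Go (a sketch, not minikube's code):

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\thost.minikube.internal") {
				continue // filter out the stale entry
			}
			kept = append(kept, line)
		}
		kept = append(kept, "192.168.39.1\thost.minikube.internal")
		// the real command writes /tmp/h.$$ and copies it back with sudo cp
		if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
	}
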
	I1018 14:09:01.610280 1760410 kubeadm.go:883] updating cluster {Name:addons-891059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-891059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 14:09:01.610462 1760410 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 14:09:01.610527 1760410 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 14:09:01.648777 1760410 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1018 14:09:01.648866 1760410 ssh_runner.go:195] Run: which lz4
	I1018 14:09:01.653595 1760410 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1018 14:09:01.658875 1760410 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1018 14:09:01.658909 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1018 14:09:03.215465 1760410 crio.go:462] duration metric: took 1.561899205s to copy over tarball
	I1018 14:09:03.215548 1760410 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1018 14:09:04.890701 1760410 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.675118935s)
	I1018 14:09:04.890741 1760410 crio.go:469] duration metric: took 1.675237586s to extract the tarball
	I1018 14:09:04.890755 1760410 ssh_runner.go:146] rm: /preloaded.tar.lz4
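
The preload sequence above is check-then-copy: stat the tarball in the guest, scp it over only when the stat fails, extract it into /var with tar -I lz4, and delete it. A sketch of the first half with plain ssh/scp (hypothetical invocation details; minikube drives this through its own ssh_runner):

	package main

	import (
		"fmt"
		"os/exec"
	)

	const (
		sshKey  = "/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa"
		sshDest = "docker@192.168.39.100"
	)

	// remoteFileExists stands in for ssh_runner's existence check.
	func remoteFileExists(path string) bool {
		return exec.Command("ssh", "-i", sshKey, sshDest, "stat", path).Run() == nil
	}

	func main() {
		const tarball = "/preloaded.tar.lz4"
		if remoteFileExists(tarball) {
			fmt.Println("preload tarball already present, skipping copy")
			return
		}
		src := "/home/jenkins/minikube-integration/21409-1755824/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"
		err := exec.Command("scp", "-i", sshKey, src, sshDest+":"+tarball).Run()
		fmt.Println("copied preload tarball, err:", err)
	}
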
	I1018 14:09:04.933819 1760410 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 14:09:04.980242 1760410 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 14:09:04.980269 1760410 cache_images.go:85] Images are preloaded, skipping loading
	I1018 14:09:04.980277 1760410 kubeadm.go:934] updating node { 192.168.39.100 8443 v1.34.1 crio true true} ...
	I1018 14:09:04.980412 1760410 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-891059 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-891059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 14:09:04.980487 1760410 ssh_runner.go:195] Run: crio config
	I1018 14:09:05.031493 1760410 cni.go:84] Creating CNI manager for ""
	I1018 14:09:05.031532 1760410 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 14:09:05.031561 1760410 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 14:09:05.031594 1760410 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.100 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-891059 NodeName:addons-891059 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 14:09:05.031791 1760410 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-891059"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.100"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.100"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 14:09:05.031889 1760410 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 14:09:05.045249 1760410 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 14:09:05.045322 1760410 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 14:09:05.057594 1760410 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1018 14:09:05.079304 1760410 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 14:09:05.101229 1760410 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1018 14:09:05.123379 1760410 ssh_runner.go:195] Run: grep 192.168.39.100	control-plane.minikube.internal$ /etc/hosts
	I1018 14:09:05.128149 1760410 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 14:09:05.144740 1760410 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 14:09:05.287867 1760410 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 14:09:05.310139 1760410 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059 for IP: 192.168.39.100
	I1018 14:09:05.310175 1760410 certs.go:195] generating shared ca certs ...
	I1018 14:09:05.310203 1760410 certs.go:227] acquiring lock for ca certs: {Name:mk20fae4d22bb4937e66ac0eaa52c1608fa22770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:05.310412 1760410 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.key
	I1018 14:09:05.928678 1760410 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt ...
	I1018 14:09:05.928717 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt: {Name:mk48305fdb94e31a92b48facef68eec843776b87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:05.928918 1760410 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.key ...
	I1018 14:09:05.928931 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.key: {Name:mk701e118ad43b61f158a839f73ec6b965102354 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:05.929018 1760410 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.key
	I1018 14:09:06.043454 1760410 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.crt ...
	I1018 14:09:06.043488 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.crt: {Name:mk77ddeb4af674721966c75040f4f1fb5d69023d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:06.043679 1760410 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.key ...
	I1018 14:09:06.043694 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.key: {Name:mk65d64f37c13d41fae5e3b77d20098229c0b1de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
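
The certs steps above create two self-signed CAs (minikubeCA and proxyClientCA) and later sign the profile certs with them. A minimal standard-library sketch of the CA half (not minikube's actual crypto.go; the key size and validity window are assumptions):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now().Add(-time.Hour),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		// self-signed: the template acts as both subject and issuer
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
		pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	}
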
	I1018 14:09:06.043772 1760410 certs.go:257] generating profile certs ...
	I1018 14:09:06.043835 1760410 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.key
	I1018 14:09:06.043862 1760410 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt with IP's: []
	I1018 14:09:06.259815 1760410 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt ...
	I1018 14:09:06.259852 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt: {Name:mk812f759d940b265a8e60c894cb050949fd9e68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:06.260037 1760410 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.key ...
	I1018 14:09:06.260054 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.key: {Name:mk50fce6a65f5d969bea0e1a48d418e711ccdfe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:06.260134 1760410 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.key.c2889daa
	I1018 14:09:06.260154 1760410 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.crt.c2889daa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.100]
	I1018 14:09:06.486406 1760410 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.crt.c2889daa ...
	I1018 14:09:06.486442 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.crt.c2889daa: {Name:mk13f44e79eaa89077b52da6090b647e00b64732 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:06.486629 1760410 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.key.c2889daa ...
	I1018 14:09:06.486643 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.key.c2889daa: {Name:mkbe94bfad32eaf986c1751799d5eb527ff32552 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:06.486733 1760410 certs.go:382] copying /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.crt.c2889daa -> /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.crt
	I1018 14:09:06.486836 1760410 certs.go:386] copying /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.key.c2889daa -> /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.key
	I1018 14:09:06.486900 1760410 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/proxy-client.key
	I1018 14:09:06.486924 1760410 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/proxy-client.crt with IP's: []
	I1018 14:09:06.798152 1760410 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/proxy-client.crt ...
	I1018 14:09:06.798201 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/proxy-client.crt: {Name:mk29883864de081c2ef5f64c49afd825bbef9059 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:06.798410 1760410 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/proxy-client.key ...
	I1018 14:09:06.798426 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/proxy-client.key: {Name:mk619e894bc6a3076fe0e333221023492d7ff3e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:06.798649 1760410 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 14:09:06.798690 1760410 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem (1082 bytes)
	I1018 14:09:06.798715 1760410 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/cert.pem (1123 bytes)
	I1018 14:09:06.798735 1760410 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/key.pem (1675 bytes)
	I1018 14:09:06.799486 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 14:09:06.845692 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 14:09:06.882745 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 14:09:06.918371 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 14:09:06.952411 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 14:09:06.985595 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 14:09:07.018257 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 14:09:07.051475 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 14:09:07.086174 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 14:09:07.118849 1760410 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 14:09:07.141590 1760410 ssh_runner.go:195] Run: openssl version
	I1018 14:09:07.148896 1760410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 14:09:07.163684 1760410 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 14:09:07.169573 1760410 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 14:09 /usr/share/ca-certificates/minikubeCA.pem
	I1018 14:09:07.169638 1760410 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 14:09:07.177781 1760410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
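
The openssl steps above install minikubeCA.pem into the guest trust store; the b5213941.0 link name is the subject hash printed by `openssl x509 -hash -noout`, which is how OpenSSL locates CAs under /etc/ssl/certs. A Go sketch of the matching sanity check that the PEM actually parses as a certificate:

	package main

	import (
		"crypto/x509"
		"fmt"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("/usr/share/ca-certificates/minikubeCA.pem")
		if err != nil {
			panic(err)
		}
		pool := x509.NewCertPool()
		if !pool.AppendCertsFromPEM(pemBytes) {
			fmt.Println("minikubeCA.pem did not parse as a certificate")
			return
		}
		fmt.Println("minikubeCA.pem parsed into the trust pool")
	}
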
	I1018 14:09:07.192577 1760410 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 14:09:07.199705 1760410 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 14:09:07.199768 1760410 kubeadm.go:400] StartCluster: {Name:addons-891059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-891059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:09:07.199879 1760410 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 14:09:07.199953 1760410 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 14:09:07.241737 1760410 cri.go:89] found id: ""
	I1018 14:09:07.241827 1760410 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 14:09:07.254574 1760410 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 14:09:07.267441 1760410 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 14:09:07.280136 1760410 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 14:09:07.280159 1760410 kubeadm.go:157] found existing configuration files:
	
	I1018 14:09:07.280207 1760410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 14:09:07.292712 1760410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 14:09:07.292791 1760410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 14:09:07.305268 1760410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 14:09:07.317524 1760410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 14:09:07.317645 1760410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 14:09:07.330484 1760410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 14:09:07.342579 1760410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 14:09:07.342663 1760410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 14:09:07.355673 1760410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 14:09:07.367952 1760410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 14:09:07.368036 1760410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 14:09:07.381331 1760410 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1018 14:09:07.547925 1760410 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 14:09:20.098002 1760410 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 14:09:20.098063 1760410 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 14:09:20.098145 1760410 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 14:09:20.098299 1760410 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 14:09:20.098447 1760410 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 14:09:20.098529 1760410 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 14:09:20.100393 1760410 out.go:252]   - Generating certificates and keys ...
	I1018 14:09:20.100495 1760410 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 14:09:20.100629 1760410 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 14:09:20.100764 1760410 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 14:09:20.100857 1760410 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 14:09:20.100964 1760410 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 14:09:20.101051 1760410 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 14:09:20.101129 1760410 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 14:09:20.101315 1760410 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-891059 localhost] and IPs [192.168.39.100 127.0.0.1 ::1]
	I1018 14:09:20.101405 1760410 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 14:09:20.101571 1760410 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-891059 localhost] and IPs [192.168.39.100 127.0.0.1 ::1]
	I1018 14:09:20.101672 1760410 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 14:09:20.101744 1760410 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 14:09:20.101795 1760410 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 14:09:20.101843 1760410 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 14:09:20.101896 1760410 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 14:09:20.101961 1760410 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 14:09:20.102011 1760410 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 14:09:20.102082 1760410 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 14:09:20.102127 1760410 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 14:09:20.102199 1760410 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 14:09:20.102260 1760410 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 14:09:20.103813 1760410 out.go:252]   - Booting up control plane ...
	I1018 14:09:20.103893 1760410 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 14:09:20.103954 1760410 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 14:09:20.104007 1760410 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 14:09:20.104089 1760410 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 14:09:20.104181 1760410 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 14:09:20.104334 1760410 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 14:09:20.104446 1760410 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 14:09:20.104482 1760410 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 14:09:20.104625 1760410 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 14:09:20.104745 1760410 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 14:09:20.104820 1760410 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.50245312s
	I1018 14:09:20.104902 1760410 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 14:09:20.104976 1760410 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.100:8443/livez
	I1018 14:09:20.105057 1760410 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 14:09:20.105126 1760410 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 14:09:20.105186 1760410 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.213660902s
	I1018 14:09:20.105249 1760410 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.327835251s
	I1018 14:09:20.105309 1760410 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.50283692s
	I1018 14:09:20.105410 1760410 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 14:09:20.105516 1760410 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 14:09:20.105572 1760410 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 14:09:20.105752 1760410 kubeadm.go:318] [mark-control-plane] Marking the node addons-891059 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 14:09:20.105817 1760410 kubeadm.go:318] [bootstrap-token] Using token: ci4c4o.8llcllq96muz9osf
	I1018 14:09:20.108036 1760410 out.go:252]   - Configuring RBAC rules ...
	I1018 14:09:20.108126 1760410 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 14:09:20.108210 1760410 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 14:09:20.108332 1760410 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 14:09:20.108465 1760410 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 14:09:20.108571 1760410 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 14:09:20.108668 1760410 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 14:09:20.108821 1760410 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 14:09:20.108863 1760410 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 14:09:20.108900 1760410 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 14:09:20.108911 1760410 kubeadm.go:318] 
	I1018 14:09:20.108961 1760410 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 14:09:20.108967 1760410 kubeadm.go:318] 
	I1018 14:09:20.109026 1760410 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 14:09:20.109031 1760410 kubeadm.go:318] 
	I1018 14:09:20.109051 1760410 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 14:09:20.109098 1760410 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 14:09:20.109140 1760410 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 14:09:20.109146 1760410 kubeadm.go:318] 
	I1018 14:09:20.109214 1760410 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 14:09:20.109221 1760410 kubeadm.go:318] 
	I1018 14:09:20.109258 1760410 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 14:09:20.109264 1760410 kubeadm.go:318] 
	I1018 14:09:20.109311 1760410 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 14:09:20.109381 1760410 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 14:09:20.109469 1760410 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 14:09:20.109488 1760410 kubeadm.go:318] 
	I1018 14:09:20.109554 1760410 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 14:09:20.109622 1760410 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 14:09:20.109628 1760410 kubeadm.go:318] 
	I1018 14:09:20.109698 1760410 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ci4c4o.8llcllq96muz9osf \
	I1018 14:09:20.109796 1760410 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b3c5d368998c8b590f32f5883c53beccabaf63a2ceb2a6106ae6129f9dfd2290 \
	I1018 14:09:20.109908 1760410 kubeadm.go:318] 	--control-plane 
	I1018 14:09:20.109934 1760410 kubeadm.go:318] 
	I1018 14:09:20.110067 1760410 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 14:09:20.110077 1760410 kubeadm.go:318] 
	I1018 14:09:20.110176 1760410 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ci4c4o.8llcllq96muz9osf \
	I1018 14:09:20.110279 1760410 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b3c5d368998c8b590f32f5883c53beccabaf63a2ceb2a6106ae6129f9dfd2290 
	I1018 14:09:20.110293 1760410 cni.go:84] Creating CNI manager for ""
	I1018 14:09:20.110301 1760410 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 14:09:20.111886 1760410 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1018 14:09:20.113016 1760410 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1018 14:09:20.127933 1760410 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1018 14:09:20.158289 1760410 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 14:09:20.158398 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:20.158416 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-891059 minikube.k8s.io/updated_at=2025_10_18T14_09_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404 minikube.k8s.io/name=addons-891059 minikube.k8s.io/primary=true
	I1018 14:09:20.315678 1760410 ops.go:34] apiserver oom_adj: -16
	I1018 14:09:20.315834 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:20.816073 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:21.316085 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:21.816909 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:22.316182 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:22.816708 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:23.316221 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:23.816476 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:24.316683 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:24.414532 1760410 kubeadm.go:1113] duration metric: took 4.256222081s to wait for elevateKubeSystemPrivileges
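
The repeated `kubectl get sa default` runs above are a poll on a roughly 500ms cadence: the elevateKubeSystemPrivileges step retries until the default service account has been created. A sketch of that loop (hypothetical helper, not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
				"--kubeconfig="+kubeconfig)
			if cmd.Run() == nil {
				return nil // default service account exists
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for the default service account")
	}

	func main() {
		err := waitForDefaultSA("/var/lib/minikube/binaries/v1.34.1/kubectl",
			"/var/lib/minikube/kubeconfig", time.Minute)
		fmt.Println("wait result:", err)
	}
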
	I1018 14:09:24.414583 1760410 kubeadm.go:402] duration metric: took 17.214819054s to StartCluster
	I1018 14:09:24.414614 1760410 settings.go:142] acquiring lock: {Name:mkc4a015ef1628793f35d59d734503738678fa0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:24.414803 1760410 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-1755824/kubeconfig
	I1018 14:09:24.415376 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/kubeconfig: {Name:mkd0359d239071160661347e1005ef052a3265ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:24.415641 1760410 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 14:09:24.415700 1760410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 14:09:24.415754 1760410 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1018 14:09:24.415887 1760410 addons.go:69] Setting yakd=true in profile "addons-891059"
	I1018 14:09:24.415896 1760410 config.go:182] Loaded profile config "addons-891059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:09:24.415930 1760410 addons.go:238] Setting addon yakd=true in "addons-891059"
	I1018 14:09:24.415941 1760410 addons.go:69] Setting registry-creds=true in profile "addons-891059"
	I1018 14:09:24.415953 1760410 addons.go:238] Setting addon registry-creds=true in "addons-891059"
	I1018 14:09:24.415971 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.415979 1760410 addons.go:69] Setting volcano=true in profile "addons-891059"
	I1018 14:09:24.415983 1760410 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-891059"
	I1018 14:09:24.415991 1760410 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-891059"
	I1018 14:09:24.415998 1760410 addons.go:69] Setting volumesnapshots=true in profile "addons-891059"
	I1018 14:09:24.416010 1760410 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-891059"
	I1018 14:09:24.415959 1760410 addons.go:69] Setting inspektor-gadget=true in profile "addons-891059"
	I1018 14:09:24.416026 1760410 addons.go:69] Setting storage-provisioner=true in profile "addons-891059"
	I1018 14:09:24.416035 1760410 addons.go:238] Setting addon storage-provisioner=true in "addons-891059"
	I1018 14:09:24.415990 1760410 addons.go:238] Setting addon volcano=true in "addons-891059"
	I1018 14:09:24.416051 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.416063 1760410 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-891059"
	I1018 14:09:24.416073 1760410 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-891059"
	I1018 14:09:24.416105 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.416110 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.416136 1760410 addons.go:69] Setting metrics-server=true in profile "addons-891059"
	I1018 14:09:24.416172 1760410 addons.go:238] Setting addon metrics-server=true in "addons-891059"
	I1018 14:09:24.416211 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.416266 1760410 addons.go:69] Setting registry=true in profile "addons-891059"
	I1018 14:09:24.416290 1760410 addons.go:238] Setting addon registry=true in "addons-891059"
	I1018 14:09:24.416318 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.416454 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.416462 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.415971 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.416496 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.416504 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.416536 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.416546 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.416565 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.416634 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.416670 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.416702 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.416010 1760410 addons.go:238] Setting addon volumesnapshots=true in "addons-891059"
	I1018 14:09:24.416740 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.416750 1760410 addons.go:69] Setting cloud-spanner=true in profile "addons-891059"
	I1018 14:09:24.416761 1760410 addons.go:238] Setting addon cloud-spanner=true in "addons-891059"
	I1018 14:09:24.416772 1760410 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-891059"
	I1018 14:09:24.416738 1760410 addons.go:69] Setting gcp-auth=true in profile "addons-891059"
	I1018 14:09:24.416797 1760410 mustload.go:65] Loading cluster: addons-891059
	I1018 14:09:24.416803 1760410 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-891059"
	I1018 14:09:24.416808 1760410 addons.go:69] Setting ingress-dns=true in profile "addons-891059"
	I1018 14:09:24.416054 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.416816 1760410 addons.go:69] Setting default-storageclass=true in profile "addons-891059"
	I1018 14:09:24.416827 1760410 addons.go:69] Setting ingress=true in profile "addons-891059"
	I1018 14:09:24.416838 1760410 addons.go:238] Setting addon ingress=true in "addons-891059"
	I1018 14:09:24.416838 1760410 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-891059"
	I1018 14:09:24.416009 1760410 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-891059"
	I1018 14:09:24.416036 1760410 addons.go:238] Setting addon inspektor-gadget=true in "addons-891059"
	I1018 14:09:24.416819 1760410 addons.go:238] Setting addon ingress-dns=true in "addons-891059"
	I1018 14:09:24.417180 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.417202 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.417220 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.417277 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.417301 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.417457 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.417670 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.417700 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.417772 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.417855 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.417889 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.417365 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.418030 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.418152 1760410 config.go:182] Loaded profile config "addons-891059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:09:24.418393 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.418424 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.418444 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.418521 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.418552 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.418624 1760410 out.go:179] * Verifying Kubernetes components...
	I1018 14:09:24.418907 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.418967 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.422521 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.422570 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.422950 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.423390 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.423424 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.425453 1760410 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 14:09:24.428788 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.428847 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.432739 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.432818 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.446515 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41227
	I1018 14:09:24.447603 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.448044 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41701
	I1018 14:09:24.448620 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.449130 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.449150 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.450319 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.450375 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.450390 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.452314 1760410 main.go:141] libmachine: () Calling .GetMachineName
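(Each "Found binary path ... / Launching plugin server ... / Plugin server listening at address 127.0.0.1:NNNNN" triple is the libmachine plugin handshake: the kvm2 driver binary runs as a child process serving RPC on an ephemeral loopback port, and the parent then calls GetVersion, SetConfigRaw, and GetMachineName over that connection. A minimal net/rpc sketch of the server side, with hypothetical types; this is illustrative, not libmachine's actual wire protocol:)

	package main

	import (
		"fmt"
		"net"
		"net/rpc"
	)

	// Driver is a hypothetical stand-in for a machine driver plugin.
	type Driver struct{}

	// GetVersion mirrors the "Using API Version  1" exchange in the log.
	func (d *Driver) GetVersion(_ int, v *int) error { *v = 1; return nil }

	func main() {
		if err := rpc.Register(new(Driver)); err != nil {
			panic(err)
		}
		// Ephemeral loopback port, as in the "Plugin server listening
		// at address 127.0.0.1:NNNNN" lines above.
		l, err := net.Listen("tcp", "127.0.0.1:0")
		if err != nil {
			panic(err)
		}
		fmt.Println("Plugin server listening at address", l.Addr())
		rpc.Accept(l)
	}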
	I1018 14:09:24.452974 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.453024 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.455440 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43969
	I1018 14:09:24.456592 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.456640 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.459616 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46693
	I1018 14:09:24.459757 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38083
	I1018 14:09:24.459794 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42705
	I1018 14:09:24.460277 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.460735 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46237
	I1018 14:09:24.460955 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.463457 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.463624 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.463650 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.463943 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.463970 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.464096 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.464766 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.464811 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.466143 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.466259 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.466646 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.467503 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.467526 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.468700 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.468724 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.469056 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.469102 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.469455 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.469522 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.470074 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.470106 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.470616 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.470636 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.471024 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41521
	I1018 14:09:24.471853 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.472590 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.472616 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.473010 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.473088 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.473315 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.473750 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34635
	I1018 14:09:24.474289 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.474360 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.474951 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.477612 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34041
	I1018 14:09:24.478762 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.479308 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.479333 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.479844 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.480258 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.480895 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.482303 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46095
	I1018 14:09:24.483440 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.483700 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.483715 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.483863 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.483872 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.484222 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.484556 1760410 addons.go:238] Setting addon default-storageclass=true in "addons-891059"
	I1018 14:09:24.484598 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.484735 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.484774 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.484961 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.485003 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.485644 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.486185 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.486221 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.488758 1760410 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-891059"
	I1018 14:09:24.488809 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.489181 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.489230 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.489519 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.489701 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46233
	I1018 14:09:24.494198 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43169
	I1018 14:09:24.495236 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.496047 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.496066 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.496101 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41357
	I1018 14:09:24.496638 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34835
	I1018 14:09:24.496952 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.497036 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45323
	I1018 14:09:24.497223 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.497670 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.497914 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.498318 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.498682 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38593
	I1018 14:09:24.498718 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.498744 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.499070 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.499580 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.499603 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.499631 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.499677 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.499736 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.500137 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.500171 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.500183 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.500231 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.500253 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.500704 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.500747 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.501004 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.501037 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.501047 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.501305 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.501852 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.501890 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.505372 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.505855 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.508424 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.508460 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.508580 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.509093 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.509143 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.510293 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33423
	I1018 14:09:24.510851 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.511364 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.512160 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.512181 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.512251 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.513848 1760410 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1018 14:09:24.513854 1760410 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1018 14:09:24.515867 1760410 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1018 14:09:24.515885 1760410 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1018 14:09:24.515912 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.516312 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.517033 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.517295 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.517359 1760410 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 14:09:24.519170 1760410 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 14:09:24.519288 1760410 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1018 14:09:24.520436 1760410 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 14:09:24.520516 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1018 14:09:24.520549 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.521274 1760410 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 14:09:24.521295 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1018 14:09:24.521320 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
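(The "scp memory --> /etc/kubernetes/addons/..." lines mean the manifest bytes come from assets compiled into the minikube binary rather than from a file on disk, and are streamed to the guest over the SSH session. A minimal Go sketch of streaming an in-memory buffer to a remote path with golang.org/x/crypto/ssh; the helper name is made up and minikube's ssh_runner may work differently:)

	package remotecopy

	import (
		"bytes"
		"fmt"

		"golang.org/x/crypto/ssh"
	)

	// writeRemote pipes in-memory bytes to dst on the guest through an
	// already-established SSH client (hypothetical helper, not
	// minikube's ssh_runner API).
	func writeRemote(client *ssh.Client, data []byte, dst string) error {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		sess.Stdin = bytes.NewReader(data)
		// "sudo tee" writes stdin to a root-owned path such as
		// /etc/kubernetes/addons/.
		return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dst))
	}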
	I1018 14:09:24.521822 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38545
	I1018 14:09:24.522725 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.523307 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.523325 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.523932 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.524192 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.527503 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.527590 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.527618 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.527649 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.527682 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35355
	I1018 14:09:24.528451 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.528456 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.528513 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.528706 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.528847 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.529262 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.529279 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.529677 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.529956 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.530621 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39361
	I1018 14:09:24.531189 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.531587 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33089
	I1018 14:09:24.532552 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.532587 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.533165 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.533199 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.534272 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.534329 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.534670 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43865
	I1018 14:09:24.534888 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.534927 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.534934 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.535018 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36367
	I1018 14:09:24.535456 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.536405 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.536423 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.536459 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.536498 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.536522 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.536586 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.536638 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.536641 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.536797 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.536878 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.537335 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.537386 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.537814 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34033
	I1018 14:09:24.537939 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.538069 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.538085 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.538431 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.538510 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.538875 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.539073 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.539143 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33021
	I1018 14:09:24.540287 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1018 14:09:24.540559 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.540650 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.540661 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.540287 1760410 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1018 14:09:24.540789 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44949
	I1018 14:09:24.541394 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.541512 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.541542 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.541580 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.542392 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.542582 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.542593 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.541968 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.541995 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.542027 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.541787 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.542477 1760410 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 14:09:24.542769 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1018 14:09:24.542787 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.543139 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.543258 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1018 14:09:24.543232 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.543329 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.544059 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.544119 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.544691 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.544728 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.545623 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.545670 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.547151 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1018 14:09:24.547560 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.548774 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.548901 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1018 14:09:24.549486 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.549513 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.549520 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38547
	I1018 14:09:24.549555 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.549743 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.549944 1760410 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1018 14:09:24.549986 1760410 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1018 14:09:24.550111 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.550462 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.550548 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.551322 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1018 14:09:24.551448 1760410 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1018 14:09:24.551471 1760410 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1018 14:09:24.551503 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.552417 1760410 out.go:179]   - Using image docker.io/registry:3.0.0
	I1018 14:09:24.552611 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.552668 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.552694 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.553138 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.553466 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1018 14:09:24.553546 1760410 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1018 14:09:24.553557 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1018 14:09:24.553575 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.555796 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1018 14:09:24.556091 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.556537 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.559463 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1018 14:09:24.560143 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39261
	I1018 14:09:24.560689 1760410 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1018 14:09:24.560709 1760410 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1018 14:09:24.560733 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.561360 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.562223 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.562248 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.562334 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37369
	I1018 14:09:24.564735 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.564798 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.564809 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.564889 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.564947 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43255
	I1018 14:09:24.565207 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.565656 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.565686 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.565804 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.565867 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.566012 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.566138 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.566251 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.566837 1760410 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1018 14:09:24.566841 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.566954 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.567074 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.567098 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.567382 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.567544 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.567609 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.567849 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.568018 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.568167 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.568390 1760410 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 14:09:24.568518 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1018 14:09:24.568539 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.568408 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.569303 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.569321 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.569601 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41813
	I1018 14:09:24.569798 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.569904 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44161
	I1018 14:09:24.570247 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.570534 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.570627 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.570989 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.571754 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.571776 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.571809 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.571835 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.571888 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.571942 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.572034 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34561
	I1018 14:09:24.572101 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:24.572114 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:24.572301 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.572420 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.572512 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:24.572532 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:24.572545 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:24.572552 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:24.572560 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:24.573079 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:24.573081 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.573095 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.573102 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:24.573108 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.573114 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	W1018 14:09:24.573205 1760410 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1018 14:09:24.573206 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.573377 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.573909 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.574598 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.574613 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.574986 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.575284 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.575403 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.576055 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41389
	I1018 14:09:24.576282 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.576635 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.576750 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.577145 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.577164 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.577387 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.577425 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.578449 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.578485 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1018 14:09:24.578527 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.578725 1760410 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 14:09:24.578741 1760410 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 14:09:24.578760 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.578783 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.579845 1760410 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1018 14:09:24.579890 1760410 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1018 14:09:24.579901 1760410 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1018 14:09:24.579916 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.579866 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.579966 1760410 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1018 14:09:24.581298 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.581518 1760410 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1018 14:09:24.581555 1760410 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 14:09:24.581566 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1018 14:09:24.581582 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.581701 1760410 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1018 14:09:24.581733 1760410 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1018 14:09:24.581762 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.582432 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.582611 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.582663 1760410 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1018 14:09:24.582679 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1018 14:09:24.582698 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.582744 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.583429 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.583635 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.583761 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.583832 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.584362 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35513
	I1018 14:09:24.584568 1760410 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 14:09:24.585155 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.585916 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.585938 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.586019 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.586361 1760410 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 14:09:24.586383 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 14:09:24.586403 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.586683 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.586913 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.587506 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.587537 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.587565 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.587802 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.587988 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.588388 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.588708 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.588631 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.588734 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.589129 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.589325 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.589522 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.590171 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.590296 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.590321 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.590811 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.591126 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.591174 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.591319 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.591484 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.591523 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.591739 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.591761 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.591773 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.591922 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.592011 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.592200 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.592253 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.592273 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.592387 1760410 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1018 14:09:24.592403 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.592465 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.592624 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.592714 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.592859 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.592993 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.593164 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.593741 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.593774 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.593963 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.594146 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.594295 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.594464 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.595795 1760410 out.go:179]   - Using image docker.io/busybox:stable
	I1018 14:09:24.597040 1760410 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 14:09:24.597063 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1018 14:09:24.597082 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.600612 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.600998 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.601019 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.601363 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.601584 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.601753 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.601908 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	W1018 14:09:24.742102 1760410 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:60786->192.168.39.100:22: read: connection reset by peer
	I1018 14:09:24.742153 1760410 retry.go:31] will retry after 155.166839ms: ssh: handshake failed: read tcp 192.168.39.1:60786->192.168.39.100:22: read: connection reset by peer
	W1018 14:09:24.905499 1760410 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:60832->192.168.39.100:22: read: connection reset by peer
	I1018 14:09:24.905539 1760410 retry.go:31] will retry after 290.251665ms: ssh: handshake failed: read tcp 192.168.39.1:60832->192.168.39.100:22: read: connection reset by peer
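
The two handshake failures above are absorbed by minikube's generic retry helper (retry.go), which sleeps a short, randomized interval and redials. A minimal Go sketch of that dial-and-retry pattern, assuming nothing about minikube's internals (the names below are illustrative, not the actual retry.go API):

    // Minimal sketch of the dial-and-retry pattern logged above; the helper
    // names are illustrative, not minikube's actual retry.go API.
    package main

    import (
        "fmt"
        "math/rand"
        "net"
        "time"
    )

    // dialWithRetry redials a TCP endpoint after transient failures, sleeping
    // a short randomized backoff between attempts, as the log lines above show.
    func dialWithRetry(addr string, maxAttempts int) (net.Conn, error) {
        var lastErr error
        for attempt := 1; attempt <= maxAttempts; attempt++ {
            conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
            if err == nil {
                return conn, nil
            }
            lastErr = err
            backoff := time.Duration(100+rand.Intn(300)) * time.Millisecond
            fmt.Printf("will retry after %v: %v\n", backoff, err)
            time.Sleep(backoff)
        }
        return nil, fmt.Errorf("dial %s: giving up after %d attempts: %w", addr, maxAttempts, lastErr)
    }

    func main() {
        if conn, err := dialWithRetry("192.168.39.100:22", 5); err == nil {
            conn.Close()
        }
    }
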
	I1018 14:09:25.195583 1760410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
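
The pipeline above edits CoreDNS in place: it dumps the coredns ConfigMap, uses sed to splice a hosts block in front of the forward directive and a log directive in front of errors, then feeds the result back through kubectl replace. Reconstructed from those sed expressions, the relevant Corefile fragment after the rewrite looks like this (untouched directives elided):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }

This is what lets pods resolve host.minikube.internal to 192.168.39.1, the host side of the VM's libvirt network.
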
	I1018 14:09:25.195661 1760410 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 14:09:25.238678 1760410 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1018 14:09:25.238705 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1018 14:09:25.239580 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 14:09:25.243439 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 14:09:25.244497 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 14:09:25.264037 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 14:09:25.312273 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 14:09:25.315550 1760410 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1018 14:09:25.315578 1760410 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1018 14:09:25.320939 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1018 14:09:25.324940 1760410 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1018 14:09:25.324962 1760410 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1018 14:09:25.327771 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 14:09:25.328434 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 14:09:25.339706 1760410 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1018 14:09:25.339737 1760410 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1018 14:09:25.369886 1760410 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1018 14:09:25.369914 1760410 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1018 14:09:25.370459 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 14:09:25.537261 1760410 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1018 14:09:25.537300 1760410 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1018 14:09:25.585100 1760410 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1018 14:09:25.585145 1760410 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1018 14:09:25.685376 1760410 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1018 14:09:25.685407 1760410 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1018 14:09:25.768517 1760410 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1018 14:09:25.768553 1760410 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1018 14:09:25.768978 1760410 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1018 14:09:25.769004 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1018 14:09:25.814134 1760410 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:25.814164 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1018 14:09:25.853698 1760410 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 14:09:25.853731 1760410 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1018 14:09:26.014188 1760410 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1018 14:09:26.014222 1760410 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1018 14:09:26.060465 1760410 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1018 14:09:26.060498 1760410 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1018 14:09:26.091905 1760410 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1018 14:09:26.091940 1760410 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1018 14:09:26.114081 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1018 14:09:26.248999 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:26.271395 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 14:09:26.432032 1760410 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1018 14:09:26.432068 1760410 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1018 14:09:26.436207 1760410 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1018 14:09:26.436242 1760410 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1018 14:09:26.558205 1760410 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1018 14:09:26.558233 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1018 14:09:26.717226 1760410 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 14:09:26.717268 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1018 14:09:26.717225 1760410 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1018 14:09:26.717386 1760410 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1018 14:09:26.825284 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1018 14:09:27.137937 1760410 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1018 14:09:27.137970 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1018 14:09:27.440610 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 14:09:27.873332 1760410 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1018 14:09:27.873382 1760410 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1018 14:09:28.056527 1760410 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.860893783s)
	I1018 14:09:28.056563 1760410 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1018 14:09:28.056618 1760410 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.860884504s)
	I1018 14:09:28.056693 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.817081387s)
	I1018 14:09:28.056751 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:28.056765 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:28.056766 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.813291284s)
	I1018 14:09:28.056811 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:28.056828 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:28.057259 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:28.057276 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:28.057280 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:28.057300 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:28.057326 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:28.057416 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:28.057439 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:28.057482 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:28.057493 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:28.057712 1760410 node_ready.go:35] waiting up to 6m0s for node "addons-891059" to be "Ready" ...
	I1018 14:09:28.057737 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:28.057777 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:28.057784 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:28.057851 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:28.057951 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:28.057965 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:28.062488 1760410 node_ready.go:49] node "addons-891059" is "Ready"
	I1018 14:09:28.062522 1760410 node_ready.go:38] duration metric: took 4.780102ms for node "addons-891059" to be "Ready" ...
	I1018 14:09:28.062537 1760410 api_server.go:52] waiting for apiserver process to appear ...
	I1018 14:09:28.062602 1760410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 14:09:28.633793 1760410 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-891059" context rescaled to 1 replicas
	I1018 14:09:28.657122 1760410 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1018 14:09:28.657153 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1018 14:09:29.297640 1760410 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1018 14:09:29.297673 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1018 14:09:29.722108 1760410 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 14:09:29.722138 1760410 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1018 14:09:30.201846 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 14:09:31.747160 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.502603848s)
	I1018 14:09:31.747234 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:31.747249 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:31.747635 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:31.747662 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:31.747675 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:31.747685 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:31.747976 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:31.748000 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:31.989912 1760410 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1018 14:09:31.989960 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:31.993852 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:31.994463 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:31.994498 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:31.994763 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:31.995004 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:31.995210 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:31.995372 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:32.401099 1760410 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
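
The "scp memory --> <path> (N bytes)" lines above stream an in-memory payload straight to a remote file over the existing SSH connection instead of shelling out to a local scp. A minimal sketch of that idea with golang.org/x/crypto/ssh, using the scp sink protocol (client construction is elided and the function name is illustrative; this is not minikube's ssh_runner code):

    // Minimal sketch of copying an in-memory buffer to a remote path over an
    // established SSH connection via the scp sink protocol; illustrative only.
    package sshsketch

    import (
        "fmt"
        "path/filepath"

        "golang.org/x/crypto/ssh"
    )

    func scpMemory(client *ssh.Client, data []byte, dest string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        stdin, err := sess.StdinPipe()
        if err != nil {
            return err
        }
        // Start the remote scp sink, then send one file record:
        // "C<mode> <size> <name>\n", the payload bytes, and a trailing \x00.
        if err := sess.Start(fmt.Sprintf("scp -t %s", filepath.Dir(dest))); err != nil {
            return err
        }
        fmt.Fprintf(stdin, "C0644 %d %s\n", len(data), filepath.Base(dest))
        stdin.Write(data)
        fmt.Fprint(stdin, "\x00")
        stdin.Close()
        return sess.Wait()
    }
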
	I1018 14:09:32.582819 1760410 addons.go:238] Setting addon gcp-auth=true in "addons-891059"
	I1018 14:09:32.582898 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:32.583276 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:32.583338 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:32.598366 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38277
	I1018 14:09:32.598979 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:32.599565 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:32.599588 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:32.599990 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:32.600582 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:32.600654 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:32.615909 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44783
	I1018 14:09:32.616524 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:32.616999 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:32.617024 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:32.617441 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:32.617696 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:32.619651 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:32.619882 1760410 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1018 14:09:32.619905 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:32.623262 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:32.623788 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:32.623815 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:32.624039 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:32.624251 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:32.624440 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:32.624678 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:34.410431 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.146350667s)
	I1018 14:09:34.410505 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.410520 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.410535 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.098229729s)
	I1018 14:09:34.410591 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.410608 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.410627 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.08966013s)
	I1018 14:09:34.410671 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.410688 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.410780 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (9.082972673s)
	I1018 14:09:34.410825 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.410842 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.410885 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.082422149s)
	I1018 14:09:34.410912 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.410921 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.410996 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.040510674s)
	I1018 14:09:34.411019 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.411040 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.411044 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.411064 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.411075 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.411083 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.411111 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.411122 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.29701229s)
	I1018 14:09:34.411143 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.411148 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.411161 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.411170 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.411178 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.411185 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.411186 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.411194 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.411202 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.411209 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.411237 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (8.162212378s)
	W1018 14:09:34.411260 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:34.411279 1760410 retry.go:31] will retry after 156.548971ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:34.411277 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.411304 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.411320 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.411329 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.411355 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.411385 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.139958439s)
	I1018 14:09:34.411415 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.411426 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.411451 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.586135977s)
	I1018 14:09:34.411563 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.411581 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.411476 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.413776 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.413792 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.413803 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.413813 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.413821 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.413830 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.413837 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.413839 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.413857 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.413878 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.413884 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.413892 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.413899 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.413949 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.413963 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.413976 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.413984 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.413993 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.414003 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.414010 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.414017 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.414067 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.414253 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.414280 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.414288 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.414297 1760410 addons.go:479] Verifying addon metrics-server=true in "addons-891059"
	I1018 14:09:34.414448 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.414488 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.414509 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.414541 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.415992 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.416015 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.416023 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.416037 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.416049 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.416063 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.415991 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.416140 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.416167 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.416177 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.416185 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.416194 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.416025 1760410 addons.go:479] Verifying addon ingress=true in "addons-891059"
	I1018 14:09:34.416625 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.416635 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.413977 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.416602 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.416980 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.416993 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.418102 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.418150 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.418163 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.418177 1760410 addons.go:479] Verifying addon registry=true in "addons-891059"
	I1018 14:09:34.418831 1760410 out.go:179] * Verifying ingress addon...
	I1018 14:09:34.418835 1760410 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-891059 service yakd-dashboard -n yakd-dashboard
	
	I1018 14:09:34.420852 1760410 out.go:179] * Verifying registry addon...
	I1018 14:09:34.422521 1760410 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1018 14:09:34.423238 1760410 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1018 14:09:34.503158 1760410 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 14:09:34.503192 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:34.503257 1760410 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1018 14:09:34.503271 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
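
The kapi.go wait loops above list pods by label selector and report the aggregate state until everything leaves Pending. A minimal client-go sketch of that poll (client construction elided; waitForLabel is an illustrative name, not minikube's kapi API):

    // Minimal sketch of polling pods by label selector until all are Running,
    // mirroring the kapi.go wait lines above; not minikube's implementation.
    package kapisketch

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func waitForLabel(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
                metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 {
                running := 0
                for _, p := range pods.Items {
                    if p.Status.Phase == corev1.PodRunning {
                        running++
                    }
                }
                if running == len(pods.Items) {
                    return nil
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pods %q in %q not Running within %v", selector, ns, timeout)
    }
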
	I1018 14:09:34.568542 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:34.621858 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.621880 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.622193 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.622248 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.622262 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	W1018 14:09:34.622394 1760410 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
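
The "Operation cannot be fulfilled ... the object has been modified" failure above is the API server's optimistic-concurrency check: another writer updated the local-path StorageClass between minikube's read and its write, so the stale resourceVersion was rejected. The idiomatic fix is to re-read and re-apply under client-go's conflict retry helper; a minimal sketch (illustrative, not minikube's addon code):

    // Minimal sketch of retrying a StorageClass update on version conflicts
    // with client-go's RetryOnConflict; illustrative only.
    package scsketch

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    func markDefault(cs kubernetes.Interface, name string, isDefault bool) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            // Re-read on every attempt so the update carries the latest
            // resourceVersion instead of the stale one that was rejected.
            sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = fmt.Sprintf("%t", isDefault)
            _, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
            return err
        })
    }
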
	I1018 14:09:34.659969 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.659996 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.660315 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.660316 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.660354 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.941419 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:34.942360 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:34.990391 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.549686758s)
	I1018 14:09:34.990429 1760410 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (6.927791238s)
	I1018 14:09:34.990461 1760410 api_server.go:72] duration metric: took 10.57479054s to wait for apiserver process to appear ...
	W1018 14:09:34.990458 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 14:09:34.990494 1760410 retry.go:31] will retry after 178.461593ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
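
Both apply failures logged above are the same CRD ordering race: custom resources (the gadget objects, then csi-hostpath-snapclass) are submitted in the same kubectl apply as the CRDs that define them, and the REST mapping does not exist until those CRDs are established, so minikube falls back to timed retries. A defensive client can instead wait for the Established condition before applying the custom resources; a minimal sketch with the apiextensions client (the CRD and kubeconfig names are taken from the log, the wait helper itself is illustrative):

    // Minimal sketch of waiting for a CRD to reach Established before applying
    // custom resources, avoiding the "ensure CRDs are installed first" retries
    // seen above; illustrative, not minikube's addons code.
    package main

    import (
        "context"
        "fmt"
        "time"

        apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitForCRD(cs apiextensionsclient.Interface, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range crd.Status.Conditions {
                    if c.Type == apiextensionsv1.Established && c.Status == apiextensionsv1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("CRD %s not Established within %v", name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := apiextensionsclient.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitForCRD(cs, "volumesnapshotclasses.snapshot.storage.k8s.io", 2*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("CRD established; safe to apply VolumeSnapshotClass objects")
    }
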
	I1018 14:09:34.990467 1760410 api_server.go:88] waiting for apiserver healthz status ...
	I1018 14:09:34.990545 1760410 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I1018 14:09:35.010676 1760410 api_server.go:279] https://192.168.39.100:8443/healthz returned 200:
	ok
	I1018 14:09:35.013686 1760410 api_server.go:141] control plane version: v1.34.1
	I1018 14:09:35.013719 1760410 api_server.go:131] duration metric: took 23.188895ms to wait for apiserver health ...
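
The healthz gate above is just an HTTPS GET against the apiserver that passes once /healthz returns 200 with body "ok". A minimal sketch of such a probe (the InsecureSkipVerify setting is a shortcut for illustration; a real client should trust the cluster CA from the kubeconfig instead):

    // Minimal sketch of the apiserver healthz probe pattern above:
    // healthy means HTTP 200 with body "ok"; illustrative only.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func apiserverHealthy(url string) (bool, error) {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Shortcut for the sketch; prefer pinning the cluster CA.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }

    func main() {
        ok, err := apiserverHealthy("https://192.168.39.100:8443/healthz")
        fmt.Println(ok, err)
    }
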
	I1018 14:09:35.013750 1760410 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 14:09:35.060072 1760410 system_pods.go:59] 16 kube-system pods found
	I1018 14:09:35.060119 1760410 system_pods.go:61] "amd-gpu-device-plugin-c5cbb" [64430541-160f-413b-b21e-6636047a8859] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 14:09:35.060127 1760410 system_pods.go:61] "coredns-66bc5c9577-9t6mk" [d2cf3593-0ffc-49aa-ab5d-1ecf71d259cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 14:09:35.060138 1760410 system_pods.go:61] "coredns-66bc5c9577-nf592" [e1dcbe4f-f240-4a2f-a4ff-686ee74288d6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 14:09:35.060145 1760410 system_pods.go:61] "etcd-addons-891059" [d809b325-765e-4e94-9832-03ad283377f1] Running
	I1018 14:09:35.060149 1760410 system_pods.go:61] "kube-apiserver-addons-891059" [edc4bec3-9171-4df8-a0e4-556ac2ece3e1] Running
	I1018 14:09:35.060152 1760410 system_pods.go:61] "kube-controller-manager-addons-891059" [03f45aa3-88da-45f0-9932-fa0a92d33e62] Running
	I1018 14:09:35.060157 1760410 system_pods.go:61] "kube-ingress-dns-minikube" [2d2be3a2-f8a7-4762-a4a6-aeea42df7e21] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 14:09:35.060160 1760410 system_pods.go:61] "kube-proxy-ckpzl" [a3ac992c-4401-40f5-93dd-7a525ec3b2a5] Running
	I1018 14:09:35.060163 1760410 system_pods.go:61] "kube-scheduler-addons-891059" [54facfd7-1a3c-4565-8ffb-d4ef204a0858] Running
	I1018 14:09:35.060168 1760410 system_pods.go:61] "metrics-server-85b7d694d7-zthlp" [23d1a687-8b62-4e3f-be5e-9664ae7f101e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 14:09:35.060178 1760410 system_pods.go:61] "nvidia-device-plugin-daemonset-5z8tb" [0e21578d-6373-41a1-aaa9-7c86d80f9c8c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 14:09:35.060186 1760410 system_pods.go:61] "registry-6b586f9694-z6m2x" [e32c82d5-bbaf-47cf-a6dd-4488d4e419e4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 14:09:35.060194 1760410 system_pods.go:61] "registry-creds-764b6fb674-sg8jp" [55d9e015-f26a-4270-8187-b8312c331504] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 14:09:35.060203 1760410 system_pods.go:61] "registry-proxy-tmmvd" [cb52b147-d27f-4a99-9ec8-ffd5f90861e4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 14:09:35.060209 1760410 system_pods.go:61] "snapshot-controller-7d9fbc56b8-b9tnq" [a028a732-94f8-46f5-8ade-adc72e44a92d] Pending
	I1018 14:09:35.060218 1760410 system_pods.go:61] "storage-provisioner" [a6f8bdeb-9db0-44f3-b3cb-8396901acaf5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 14:09:35.060229 1760410 system_pods.go:74] duration metric: took 46.469158ms to wait for pod list to return data ...
	I1018 14:09:35.060248 1760410 default_sa.go:34] waiting for default service account to be created ...
	I1018 14:09:35.104632 1760410 default_sa.go:45] found service account: "default"
	I1018 14:09:35.104663 1760410 default_sa.go:55] duration metric: took 44.40546ms for default service account to be created ...
	I1018 14:09:35.104677 1760410 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 14:09:35.169265 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 14:09:35.176957 1760410 system_pods.go:86] 17 kube-system pods found
	I1018 14:09:35.177007 1760410 system_pods.go:89] "amd-gpu-device-plugin-c5cbb" [64430541-160f-413b-b21e-6636047a8859] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 14:09:35.177019 1760410 system_pods.go:89] "coredns-66bc5c9577-9t6mk" [d2cf3593-0ffc-49aa-ab5d-1ecf71d259cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 14:09:35.177052 1760410 system_pods.go:89] "coredns-66bc5c9577-nf592" [e1dcbe4f-f240-4a2f-a4ff-686ee74288d6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 14:09:35.177068 1760410 system_pods.go:89] "etcd-addons-891059" [d809b325-765e-4e94-9832-03ad283377f1] Running
	I1018 14:09:35.177079 1760410 system_pods.go:89] "kube-apiserver-addons-891059" [edc4bec3-9171-4df8-a0e4-556ac2ece3e1] Running
	I1018 14:09:35.177087 1760410 system_pods.go:89] "kube-controller-manager-addons-891059" [03f45aa3-88da-45f0-9932-fa0a92d33e62] Running
	I1018 14:09:35.177100 1760410 system_pods.go:89] "kube-ingress-dns-minikube" [2d2be3a2-f8a7-4762-a4a6-aeea42df7e21] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 14:09:35.177106 1760410 system_pods.go:89] "kube-proxy-ckpzl" [a3ac992c-4401-40f5-93dd-7a525ec3b2a5] Running
	I1018 14:09:35.177117 1760410 system_pods.go:89] "kube-scheduler-addons-891059" [54facfd7-1a3c-4565-8ffb-d4ef204a0858] Running
	I1018 14:09:35.177125 1760410 system_pods.go:89] "metrics-server-85b7d694d7-zthlp" [23d1a687-8b62-4e3f-be5e-9664ae7f101e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 14:09:35.177134 1760410 system_pods.go:89] "nvidia-device-plugin-daemonset-5z8tb" [0e21578d-6373-41a1-aaa9-7c86d80f9c8c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 14:09:35.177145 1760410 system_pods.go:89] "registry-6b586f9694-z6m2x" [e32c82d5-bbaf-47cf-a6dd-4488d4e419e4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 14:09:35.177156 1760410 system_pods.go:89] "registry-creds-764b6fb674-sg8jp" [55d9e015-f26a-4270-8187-b8312c331504] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 14:09:35.177171 1760410 system_pods.go:89] "registry-proxy-tmmvd" [cb52b147-d27f-4a99-9ec8-ffd5f90861e4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 14:09:35.177180 1760410 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b9tnq" [a028a732-94f8-46f5-8ade-adc72e44a92d] Pending
	I1018 14:09:35.177187 1760410 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bzhfk" [f3e3fb2c-05b7-448d-bca6-3438d70868b1] Pending
	I1018 14:09:35.177198 1760410 system_pods.go:89] "storage-provisioner" [a6f8bdeb-9db0-44f3-b3cb-8396901acaf5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 14:09:35.177213 1760410 system_pods.go:126] duration metric: took 72.526149ms to wait for k8s-apps to be running ...
	I1018 14:09:35.177228 1760410 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 14:09:35.177303 1760410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 14:09:35.445832 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:35.461317 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:35.939729 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:35.942319 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:36.445234 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.243330128s)
	I1018 14:09:36.445310 1760410 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.825399752s)
	I1018 14:09:36.445314 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:36.445449 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:36.445853 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:36.445924 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:36.445941 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:36.445953 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:36.445962 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:36.446272 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:36.446292 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:36.446304 1760410 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-891059"
	I1018 14:09:36.447257 1760410 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1018 14:09:36.448070 1760410 out.go:179] * Verifying csi-hostpath-driver addon...
	I1018 14:09:36.449546 1760410 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 14:09:36.450329 1760410 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1018 14:09:36.450870 1760410 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1018 14:09:36.450894 1760410 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1018 14:09:36.458277 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:36.471857 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:36.484451 1760410 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 14:09:36.484481 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
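The kapi.go:96 lines repeat about twice a second per label selector until every matching pod leaves Pending. A sketch of such a poll loop with client-go (illustrative shape only, not minikube's kapi package; the function name and timings are assumptions):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabel lists pods matching selector in ns until all are Running
	// or the deadline passes, echoing the "waiting for pod" lines above.
	func waitForLabel(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				ready := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
						ready = false
					}
				}
				if ready {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %q", selector)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForLabel(cs, "kube-system",
			"kubernetes.io/minikube-addons=csi-hostpath-driver", 5*time.Minute); err != nil {
			panic(err)
		}
	}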
	I1018 14:09:36.597464 1760410 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1018 14:09:36.597499 1760410 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1018 14:09:36.732996 1760410 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 14:09:36.733028 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1018 14:09:36.885741 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
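Each addon is staged the same way: the manifest bytes are copied onto the node (the ssh_runner.go:362 scp lines above), then applied in a single kubectl invocation run under the node's kubeconfig. A sketch of assembling that exact command shape (hypothetical helper; sudo accepts the leading VAR=value assignment):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// applyAddons assembles the command shape seen in the log:
	//   sudo KUBECONFIG=<kubeconfig> <kubectl> apply -f <file> [-f <file> ...]
	func applyAddons(kubectl, kubeconfig string, files ...string) *exec.Cmd {
		args := []string{"KUBECONFIG=" + kubeconfig, kubectl, "apply"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		return exec.Command("sudo", args...)
	}

	func main() {
		cmd := applyAddons(
			"/var/lib/minikube/binaries/v1.34.1/kubectl",
			"/var/lib/minikube/kubeconfig",
			"/etc/kubernetes/addons/gcp-auth-ns.yaml",
			"/etc/kubernetes/addons/gcp-auth-service.yaml",
			"/etc/kubernetes/addons/gcp-auth-webhook.yaml",
		)
		fmt.Println(cmd.String())
	}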
	I1018 14:09:36.948270 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:36.948391 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:36.960478 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:37.436446 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:37.439412 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:37.456938 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:37.927403 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:37.928102 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:37.956527 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:38.404132 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.835532164s)
	W1018 14:09:38.404196 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:38.404224 1760410 retry.go:31] will retry after 203.009637ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
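Every attempt fails identically: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because the document carries no apiVersion or kind, so retrying cannot succeed until the file on disk is fixed; the other resources in the same apply keep going through "unchanged". A quick pre-check for that failure mode (illustrative only, using gopkg.in/yaml.v3):

	package main

	import (
		"fmt"
		"os"

		"gopkg.in/yaml.v3"
	)

	// header mirrors the two fields kubectl's validator reports as missing.
	type header struct {
		APIVersion string `yaml:"apiVersion"`
		Kind       string `yaml:"kind"`
	}

	func main() {
		data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
		if err != nil {
			panic(err)
		}
		// yaml.Unmarshal reads only the first document of a multi-doc file,
		// which is enough to reproduce the validator's complaint here.
		var h header
		if err := yaml.Unmarshal(data, &h); err != nil {
			panic(err)
		}
		if h.APIVersion == "" || h.Kind == "" {
			fmt.Println("would fail kubectl validation: apiVersion/kind not set")
		}
	}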
	I1018 14:09:38.433864 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:38.434743 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:38.531382 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:38.607892 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:38.751077 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.58176118s)
	I1018 14:09:38.751130 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:38.751161 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:38.751178 1760410 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.573842033s)
	I1018 14:09:38.751219 1760410 system_svc.go:56] duration metric: took 3.573986856s WaitForService to wait for kubelet
	I1018 14:09:38.751238 1760410 kubeadm.go:586] duration metric: took 14.335564787s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 14:09:38.751274 1760410 node_conditions.go:102] verifying NodePressure condition ...
	I1018 14:09:38.751483 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:38.751506 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:38.751516 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:38.751529 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:38.751536 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:38.751791 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:38.751808 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:38.851019 1760410 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1018 14:09:38.851051 1760410 node_conditions.go:123] node cpu capacity is 2
	I1018 14:09:38.851069 1760410 node_conditions.go:105] duration metric: took 99.788234ms to run NodePressure ...
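The NodePressure verification reads capacity and conditions straight from node status; the figures above (17734596Ki ephemeral storage, 2 CPUs) are those fields. A compact, self-contained sketch of the same read (assumed flow, not minikube's node_conditions code):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
				n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
			// flag any pressure condition that is currently True
			for _, c := range n.Status.Conditions {
				switch c.Type {
				case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
					if c.Status == corev1.ConditionTrue {
						fmt.Printf("  pressure: %s\n", c.Type)
					}
				}
			}
		}
	}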
	I1018 14:09:38.851086 1760410 start.go:241] waiting for startup goroutines ...
	I1018 14:09:38.908065 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.022268979s)
	I1018 14:09:38.908143 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:38.908165 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:38.908474 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:38.908500 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:38.908510 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:38.908518 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:38.908801 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:38.908819 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:38.908845 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:38.909928 1760410 addons.go:479] Verifying addon gcp-auth=true in "addons-891059"
	I1018 14:09:38.911794 1760410 out.go:179] * Verifying gcp-auth addon...
	I1018 14:09:38.913871 1760410 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1018 14:09:38.969859 1760410 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1018 14:09:38.969881 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:38.979126 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:38.979302 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:38.999385 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:39.427914 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:39.428338 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:39.431173 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:39.465614 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:39.930950 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:39.936675 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:39.942841 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:39.965308 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:40.421639 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:40.429893 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:40.429965 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:40.457177 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:40.676324 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.068378617s)
	W1018 14:09:40.676402 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:40.676434 1760410 retry.go:31] will retry after 741.361151ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:40.925104 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:40.933643 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:41.024046 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:41.027134 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:41.418785 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:41.422791 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:41.437450 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:41.437815 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:41.458160 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:41.920933 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:41.931994 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:41.932787 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:41.954074 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:42.420874 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:42.427884 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:42.432996 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:42.455566 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:42.935811 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:42.935897 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:42.936364 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:42.948192 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.529349883s)
	W1018 14:09:42.948266 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:42.948305 1760410 retry.go:31] will retry after 603.252738ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:42.961547 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:43.421694 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:43.425963 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:43.432125 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:43.454728 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:43.552443 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:43.920168 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:43.926196 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:43.932562 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:43.954780 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:44.418856 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:44.434761 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:44.434815 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:44.485100 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:44.719803 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.167302475s)
	W1018 14:09:44.719876 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:44.719906 1760410 retry.go:31] will retry after 756.582939ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:44.919572 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:44.929974 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:44.930622 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:44.954972 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:45.419454 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:45.431537 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:45.435706 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:45.458249 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:45.477327 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:45.921959 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:45.932928 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:45.933443 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:45.960253 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:46.424197 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:46.434428 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:46.437611 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:46.457951 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:46.721183 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.243789601s)
	W1018 14:09:46.721253 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:46.721284 1760410 retry.go:31] will retry after 1.22541109s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:46.920063 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:46.927281 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:46.930483 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:46.954658 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:47.422281 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:47.427164 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:47.431758 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:47.456565 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:47.926249 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:47.939833 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:47.940075 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:47.946922 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:47.966036 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:48.420073 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:48.432202 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:48.434126 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:48.457282 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:48.920393 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:48.930362 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:48.932858 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:48.957018 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:49.201980 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.255004165s)
	W1018 14:09:49.202036 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:49.202059 1760410 retry.go:31] will retry after 2.58897953s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
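The retry delays grow roughly geometrically with jitter (203ms, 741ms, 603ms, 756ms, 1.23s, 2.59s so far, continuing to double below). A minimal sketch of that pattern (assumed shape, not minikube's retry package):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff runs op until it succeeds or attempts are exhausted,
	// doubling a jittered delay between tries, as the log above shows.
	func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			d := base << uint(i)                          // exponential growth
			d += time.Duration(rand.Int63n(int64(d / 2))) // up to +50% jitter
			fmt.Printf("will retry after %s: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		_ = retryWithBackoff(5, 200*time.Millisecond, func() error {
			return fmt.Errorf("apply failed")
		})
	}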
	I1018 14:09:49.420911 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:49.428333 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:49.430869 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:49.457131 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:50.368228 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:50.376847 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:50.376847 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:50.377051 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:50.476106 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:50.476372 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:50.479024 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:50.479966 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:50.920534 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:50.935331 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:50.938361 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:50.961186 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:51.424118 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:51.430809 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:51.432102 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:51.455044 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:51.791362 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:51.922858 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:51.934999 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:51.935987 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:51.958913 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:52.642039 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:52.642370 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:52.644501 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:52.644727 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:52.918752 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:52.926588 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:52.930871 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:52.956219 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:53.183831 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.392411457s)
	W1018 14:09:53.183895 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:53.183924 1760410 retry.go:31] will retry after 4.131889795s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:53.417891 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:53.426911 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:53.428495 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:53.454047 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:53.919491 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:53.929299 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:53.929427 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:53.958043 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:54.418456 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:54.427470 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:54.427657 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:54.456313 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:54.919925 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:54.927822 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:54.928397 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:54.955119 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:55.419222 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:55.429271 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:55.430752 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:55.455541 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:55.918460 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:55.928654 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:55.930176 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:55.958687 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:56.417289 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:56.426666 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:56.426937 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:56.456516 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:56.921455 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:56.931545 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:56.932200 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:56.957601 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:57.316649 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:57.422032 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:57.435023 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:57.437778 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:57.455440 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:57.921161 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:57.929313 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:57.929394 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:57.955970 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:58.423288 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:58.439731 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:58.440095 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:58.786495 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:58.919590 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:58.930253 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:58.932272 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:58.957912 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:58.980642 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.663942768s)
	W1018 14:09:58.980696 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:58.980722 1760410 retry.go:31] will retry after 6.037644719s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:59.421401 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:59.428863 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:59.429465 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:59.458445 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:59.918316 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:59.928753 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:59.928856 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:59.955245 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:00.418136 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:00.427048 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:00.428214 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:00.457368 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:00.919392 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:00.929649 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:00.931313 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:00.959561 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:01.420084 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:01.426435 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:01.428419 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:01.463886 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:01.918664 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:01.927921 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:01.927979 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:01.954513 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:02.417929 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:02.426037 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:02.428261 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:02.455407 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:02.922146 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:02.928949 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:02.933375 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:02.956535 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:03.420697 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:03.429208 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:03.432897 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:03.459039 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:03.918554 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:03.926959 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:03.927105 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:03.955657 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:04.418489 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:04.430359 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:04.430521 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:04.456644 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:04.918502 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:04.930599 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:04.930923 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:04.956737 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:05.018763 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:10:05.417681 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:05.428004 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:05.429827 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:05.456781 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:05.917569 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:05.926923 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:05.928124 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:05.957076 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:06.036566 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.017738492s)
	W1018 14:10:06.036634 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:10:06.036662 1760410 retry.go:31] will retry after 12.004802236s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
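The `retry.go:31` lines above show minikube's addon applier re-running the failed `kubectl apply` after a growing delay (12s here, roughly 15s on the later attempts). Below is a minimal sketch of that retry-until-deadline pattern in plain Go; the function name, backoff constants, and jitter are illustrative assumptions, not minikube's actual retry.go.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs op until it succeeds or the deadline passes,
// sleeping a jittered, growing interval between attempts (hypothetical
// constants; minikube's retry.go derives its own delays).
func retryWithBackoff(op func() error, deadline time.Duration) error {
	start := time.Now()
	delay := 10 * time.Second
	for {
		err := op()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("giving up after %s: %w", time.Since(start), err)
		}
		// Add up to 50% jitter so parallel retries don't synchronize.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow the base delay each round
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(func() error {
		attempts++
		if attempts < 3 {
			return errors.New("Process exited with status 1")
		}
		return nil
	}, 2*time.Minute)
	fmt.Println("result:", err)
}

Jitter matters here because several addon appliers run in parallel; without it their retries would fire in lockstep against the same apiserver.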
	I1018 14:10:06.419404 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:06.429963 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:06.430297 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:06.457600 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:06.919260 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:06.929676 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:06.929775 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:07.155631 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:07.418580 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:07.427122 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:07.428776 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:07.457310 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:07.922270 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:07.926818 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:07.929313 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:07.956530 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:08.418802 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:08.429772 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:08.430398 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:08.456743 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:08.919063 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:08.930278 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:08.931169 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:08.954708 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:09.424687 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:09.432292 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:09.435514 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:09.460217 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:09.923294 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:09.930199 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:09.931023 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:09.955035 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:10.419846 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:10.426749 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:10.429140 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:10.456969 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:10.953436 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:10.956917 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:10.957054 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:10.957495 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:11.418736 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:11.426419 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:11.430935 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:11.455617 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:11.918928 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:11.927115 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:11.931414 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:11.960289 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:12.418970 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:12.430735 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:12.433659 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:12.456647 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:12.921054 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:12.928629 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:12.928668 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:12.956226 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:13.420386 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:13.427464 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:13.429090 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:13.455488 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:13.918328 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:13.927700 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:13.928318 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:13.954810 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:14.419754 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:14.425924 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:14.427917 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:14.455974 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:14.925112 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:14.929625 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:14.933370 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:14.957078 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:15.418580 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:15.428235 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:15.429169 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:15.457022 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:15.919800 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:15.936816 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:15.937017 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:15.957268 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:16.417946 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:16.427385 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:16.431794 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:16.456614 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:16.919525 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:16.926577 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:16.926658 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:16.954174 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:17.421789 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:17.426437 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:17.431339 1760410 kapi.go:107] duration metric: took 43.008095172s to wait for kubernetes.io/minikube-addons=registry ...
	I1018 14:10:17.457873 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:17.918594 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:17.929987 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:17.961960 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:18.042188 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:10:18.422928 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:18.427500 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:18.456271 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:18.919452 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:18.930289 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:18.956388 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:19.361633 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.319335622s)
	W1018 14:10:19.361689 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:10:19.361728 1760410 retry.go:31] will retry after 15.164014777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:10:19.422771 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:19.438239 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:19.456621 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:19.921757 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:19.928298 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:19.956842 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:20.420260 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:20.427508 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:20.458936 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:20.918928 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:20.927378 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:20.955188 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:21.420104 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:21.426947 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:21.524486 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:21.918327 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:21.927194 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:21.955524 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:22.423531 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:22.426633 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:22.454711 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:22.921113 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:22.928945 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:22.954404 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:23.420637 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:23.430677 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:23.459231 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:23.919372 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:23.928323 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:23.958731 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:24.420036 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:24.427298 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:24.456668 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:24.919003 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:24.927657 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:24.957888 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:25.421338 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:25.427501 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:25.455612 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:25.918199 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:25.927869 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:25.958203 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:26.419024 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:26.428832 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:26.456514 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:26.918247 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:26.928171 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:26.956494 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:27.418446 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:27.430922 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:27.460225 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:27.934863 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:27.935267 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:27.956304 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:28.418276 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:28.426282 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:28.455657 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:28.921058 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:28.928216 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:28.957699 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:29.423964 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:29.429784 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:29.459912 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:29.919968 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:29.926486 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:30.021594 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:30.431798 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:30.435432 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:30.456454 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:30.930069 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:30.943105 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:30.955957 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:31.429432 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:31.438231 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:31.455431 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:31.921095 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:31.931309 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:31.956251 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:32.420152 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:32.428240 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:32.458714 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:32.922542 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:32.930043 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:32.957260 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:33.419500 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:33.428933 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:33.455363 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:33.923146 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:33.929585 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:33.958835 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:34.420137 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:34.426760 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:34.457114 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:34.526904 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:10:34.919159 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:34.928439 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:34.955153 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:35.418928 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:35.426233 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:35.458485 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:35.764870 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.237905947s)
	W1018 14:10:35.764934 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:10:35.764957 1760410 retry.go:31] will retry after 14.798475806s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:10:35.919540 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:35.928534 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:35.955008 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:36.450125 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:36.453729 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:36.536855 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:36.917765 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:36.925569 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:36.955287 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:37.419773 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:37.427166 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:37.456318 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:37.919552 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:37.927629 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:38.025256 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:38.424973 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:38.428550 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:38.453898 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:38.919099 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:38.926293 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:38.955682 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:39.418953 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:39.430007 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:39.459225 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:39.920652 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:39.929231 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:39.954710 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:40.421937 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:40.429412 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:40.480118 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:40.920635 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:40.929091 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:40.956998 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:41.426085 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:41.427988 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:41.459105 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:41.918797 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:41.926487 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:41.955036 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:42.420125 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:42.428890 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:42.454689 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:42.919029 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:42.927753 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:42.954419 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:43.422025 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:43.426830 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:43.457376 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:43.917234 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:43.930520 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:43.956616 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:44.419241 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:44.428799 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:44.456787 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:44.918484 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:44.928332 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:44.961125 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:45.421688 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:45.427032 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:45.457168 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:45.919022 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:45.927029 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:45.959091 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:46.418637 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:46.429220 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:46.455413 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:46.919149 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:46.926519 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:46.956560 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:47.419157 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:47.427737 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:47.455569 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:47.918673 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:47.926052 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:47.956842 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:48.420322 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:48.430745 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:48.456105 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:48.922457 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:48.928328 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:48.956428 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:49.434222 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:49.437527 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:49.461279 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:49.920966 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:49.929362 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:49.956797 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:50.418327 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:50.430238 1760410 kapi.go:107] duration metric: took 1m16.007712358s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1018 14:10:50.456335 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:50.564457 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:10:50.917217 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:50.958103 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:51.421689 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:51.455392 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:51.920286 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:51.942284 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.377769111s)
	W1018 14:10:51.942338 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:10:51.942424 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:10:51.942439 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:10:51.942850 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:10:51.942873 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:10:51.942875 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:10:51.942891 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:10:51.942902 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:10:51.943167 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:10:51.943186 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	W1018 14:10:51.943290 1760410 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
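The root cause surfaced here is consistent across all four attempts: `kubectl apply` rejects /etc/kubernetes/addons/ig-crd.yaml because the manifest is missing its `apiVersion` and `kind` fields, so once the retry budget is exhausted the inspektor-gadget addon is reported as failed even though every object in ig-deployment.yaml applied cleanly ("unchanged"/"configured"). A quick, hypothetical pre-flight check for that class of error, assuming the `sigs.k8s.io/yaml` package is available:

package main

import (
	"fmt"
	"os"

	"sigs.k8s.io/yaml" // assumed dependency; converts YAML to JSON before unmarshalling
)

// typeMeta mirrors the two fields kubectl's validator complained about.
type typeMeta struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
}

// checkManifest reports whether a manifest document carries the apiVersion
// and kind that kubectl's client-side validation requires.
func checkManifest(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var tm typeMeta
	if err := yaml.Unmarshal(data, &tm); err != nil {
		return fmt.Errorf("%s: %w", path, err)
	}
	if tm.APIVersion == "" || tm.Kind == "" {
		return fmt.Errorf("%s: apiVersion or kind not set", path)
	}
	return nil
}

func main() {
	if err := checkManifest("/etc/kubernetes/addons/ig-crd.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, "validation would fail:", err)
	}
}

A real checker would first split multi-document files on `---` and validate each document separately; the sketch assumes a single-document manifest.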
	I1018 14:10:51.956095 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:52.418797 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:52.455097 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:52.918142 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:52.955842 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:53.417788 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:53.454466 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:53.928372 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:53.956892 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:54.421372 1760410 kapi.go:107] duration metric: took 1m15.507497357s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1018 14:10:54.422977 1760410 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-891059 cluster.
	I1018 14:10:54.424170 1760410 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1018 14:10:54.425362 1760410 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1018 14:10:54.455256 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:54.954565 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:55.455801 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:55.954326 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:56.455155 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:56.954954 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:57.455480 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:57.957998 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:58.454831 1760410 kapi.go:107] duration metric: took 1m22.004497442s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1018 14:10:58.456573 1760410 out.go:179] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, storage-provisioner, cloud-spanner, metrics-server, ingress-dns, registry-creds, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1018 14:10:58.457854 1760410 addons.go:514] duration metric: took 1m34.042106278s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin storage-provisioner cloud-spanner metrics-server ingress-dns registry-creds yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
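Each `kapi.go:96` line above is one poll of a label selector: the loop re-lists matching pods roughly every 500ms (per the timestamps) until they leave Pending, then `kapi.go:107` prints the total wait. A minimal client-go sketch of that wait-for-label pattern follows; the function name, interval, and Running-phase check are assumptions, not minikube's kapi.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls pods matching selector in ns until every match is
// Running, or until ctx is cancelled.
func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	start := time.Now()
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			ready := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					ready = false
					break
				}
			}
			if ready {
				fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	_ = waitForLabel(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver")
}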
	I1018 14:10:58.457949 1760410 start.go:246] waiting for cluster config update ...
	I1018 14:10:58.457975 1760410 start.go:255] writing updated cluster config ...
	I1018 14:10:58.458280 1760410 ssh_runner.go:195] Run: rm -f paused
	I1018 14:10:58.466229 1760410 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 14:10:58.470432 1760410 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9t6mk" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:58.477134 1760410 pod_ready.go:94] pod "coredns-66bc5c9577-9t6mk" is "Ready"
	I1018 14:10:58.477163 1760410 pod_ready.go:86] duration metric: took 6.703976ms for pod "coredns-66bc5c9577-9t6mk" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:58.479169 1760410 pod_ready.go:83] waiting for pod "etcd-addons-891059" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:58.489364 1760410 pod_ready.go:94] pod "etcd-addons-891059" is "Ready"
	I1018 14:10:58.489404 1760410 pod_ready.go:86] duration metric: took 10.207192ms for pod "etcd-addons-891059" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:58.491622 1760410 pod_ready.go:83] waiting for pod "kube-apiserver-addons-891059" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:58.497381 1760410 pod_ready.go:94] pod "kube-apiserver-addons-891059" is "Ready"
	I1018 14:10:58.497406 1760410 pod_ready.go:86] duration metric: took 5.754148ms for pod "kube-apiserver-addons-891059" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:58.499963 1760410 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-891059" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:58.870880 1760410 pod_ready.go:94] pod "kube-controller-manager-addons-891059" is "Ready"
	I1018 14:10:58.870932 1760410 pod_ready.go:86] duration metric: took 370.945889ms for pod "kube-controller-manager-addons-891059" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:59.070811 1760410 pod_ready.go:83] waiting for pod "kube-proxy-ckpzl" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:59.471322 1760410 pod_ready.go:94] pod "kube-proxy-ckpzl" is "Ready"
	I1018 14:10:59.471383 1760410 pod_ready.go:86] duration metric: took 400.536721ms for pod "kube-proxy-ckpzl" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:59.672128 1760410 pod_ready.go:83] waiting for pod "kube-scheduler-addons-891059" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:11:00.071253 1760410 pod_ready.go:94] pod "kube-scheduler-addons-891059" is "Ready"
	I1018 14:11:00.071288 1760410 pod_ready.go:86] duration metric: took 399.125586ms for pod "kube-scheduler-addons-891059" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:11:00.071306 1760410 pod_ready.go:40] duration metric: took 1.60503304s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
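"Ready" in these `pod_ready.go` lines refers to the pod's Ready condition rather than its phase; the check amounts to scanning `status.conditions` for `Ready=True`. A self-contained sketch of that test, under the assumption that the caller already holds a client-go `*corev1.Pod`:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady mirrors the "Ready" test applied to each kube-system pod:
// true once the PodReady condition reports ConditionTrue.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionTrue},
			},
		},
	}
	fmt.Println("ready:", isPodReady(pod)) // ready: true
}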
	I1018 14:11:00.118648 1760410 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 14:11:00.120494 1760410 out.go:179] * Done! kubectl is now configured to use "addons-891059" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 14:19:38 addons-891059 crio[822]: time="2025-10-18 14:19:38.516474445Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760797178516443972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:520517,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=957a36b2-bb37-4de4-aa5a-423735088037 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:19:38 addons-891059 crio[822]: time="2025-10-18 14:19:38.517161703Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6d3de1e6-f5d4-4a79-9963-b44368a865d0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:19:38 addons-891059 crio[822]: time="2025-10-18 14:19:38.517327667Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6d3de1e6-f5d4-4a79-9963-b44368a865d0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:19:38 addons-891059 crio[822]: time="2025-10-18 14:19:38.518172849Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4019b2f5a82ebc7fb6dabae9a874d699665a5d8c69de73eb709ca4a501ac015,PodSandboxId:871fa03a650614957b7d3d2014f39478cf8cb5cd45eb550c6abd6222b43732a9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760796662606988160,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 75ccff45-9202-4152-b90e-8a5a6d306c7d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ce2976bee33494c9148720fc6f41dafc7c06699c436b9f7352992e408fc1ce,PodSandboxId:2f9eb1464924400027510bd40640a85e472321a499aaff7e545d8f90a3a2b454,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760796649028158931,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-bphwz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5355fea1-7cc1-4587-853e-61aaaa6f569e,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.po
rts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a6267021fe47465dfb0a972ca3ac1853819fcb8ec9c4af79da3515676f56c70d,PodSandboxId:7483a2b2bce44deaa3b7126ad65266f9ccb9eb59517cc399fde2646bdce00e31,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a
7bf2,State:CONTAINER_EXITED,CreatedAt:1760796634343510547,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lz2l5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: edbb1e3e-09f2-4958-b943-de86e541c2ab,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:405281ec9edfa02e6ef1722dec6adc497496544ed9e116c4827e07faa66e42b3,PodSandboxId:784fb9851d0e370b86d85cb15f009b0ada6ea2b7f21e505158415537390f7d3a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba
112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760796631912253285,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nbrm2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e48f1e46-67fb-4c71-bc01-b2f3743345f0,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:751b2df6a5bf4c3261a679f6e961086b9a7e8a0d308b47ba5a823ed41d50ff7c,PodSandboxId:e7adc46dd97a6e6351f075aad05529d7968ddcfdb815b441bff765545717c999,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760796621649083356,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-bz8k2,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 32f0a88f-aea2-4621-a5b1-df5a3fb86a2b,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3faa5d947b9ededdb0f9530cfb2606f9d20f027050a247e368207048d7856361,PodSandboxId:04626452678ece1669cf1b64aa42ec4e38880fec5bfbbb2efb6abcab66a2eba0,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd34
6a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760796611084064989,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2be3a2-f8a7-4762-a4a6-aeea42df7e21,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da75007bac0f47603bb3540fd8ae444427639a840b26793c26a279445acc6504,PodSandboxId:bf130a85fe68d5cdda719544aa9afd112627aeb7acb1df2c62daeedf486112a3,Metadata:&ContainerMetadata{Name
:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760796577983458040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6f8bdeb-9db0-44f3-b3cb-8396901acaf5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90350cf8ae05058e381c6f06dfaaa1b66c33001b294c94602cbb4601d22e5bc2,PodSandboxId:b439dd6e51abd6ee7156af98c543df3bcd516cd309de6b0b6fd934ae60d4579a,Metadata:&ContainerMetadata{Name:amd-gpu-dev
ice-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760796574525913819,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-c5cbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64430541-160f-413b-b21e-6636047a8859,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b099b5b37807cb6ddae926ed2ce7fd3b3113ee1520cb817da8f25923c16c925,PodSandboxId:ba30da275bea105c47caa89fd0d4a924e96bd43b200434b972d0f1686c5cdb46,Meta
data:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760796569075663973,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9t6mk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2cf3593-0ffc-49aa-ab5d-1ecf71d259cc,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97e1670c81585e6415c369e52af3deebb586e548711c359ac4fe22d13bfbf881,PodSandboxId:8fb6c60415fdaa40da442b8d93572f59350e86e5027e05f1e616ddc3e66d1895,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760796567868668763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ckpzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3ac992c-4401-40f5-93dd-7a525ec3b2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f010fdc156cb398c84f19945fc8b9f186ef23cb554bce047cf0bdadc63ef552,PodSandboxId:bfa6fdc1baf4d2d9eaa5d56358672ee6314ea527df88bc7c5cfbb6d68599a772,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760796553601510681,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4360d09804819a4ab0d1ffed7423947,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":
\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873a633e0ebfdc97218e103cd398dde377449c146a2b3d8affa3222d72e07fad,PodSandboxId:4b35987ede0428e0950b004d1104001ead21d6b6989238185c2fb74d3cf3bf44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760796553612924961,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1348b107c675acfd26c3d687c91d60c5,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50cc3d2477595030b199dee8a2c8a4cb8f2f508dbbe7bdf89f535de0d3d1d6b6,PodSandboxId:b783fc0f686a0773f409244090fb0347fd53adfbe3110712527fc3d39b81e149,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760796553577778017,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-891059,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 5086595138b36f6eb8ac54e83c6bc182,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:550e8ca214589028236bc3f3e98efbed492d3f84addbacedfb6929bee8541bab,PodSandboxId:c8fbc229d4f5f4b227bfc321c455f9928cc82e2099fb0746d33c7d9c893295f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760796553532990421,Labels:map[string]string{io.kubernetes.container.name: kube
-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97082571db3e60e44c3d60e99a384436,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6d3de1e6-f5d4-4a79-9963-b44368a865d0 name=/runtime.v1.RuntimeService/ListContainers
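
Each CRI-O poll cycle above is the same trio of CRI RPCs: /runtime.v1.RuntimeService/Version, /runtime.v1.ImageService/ImageFsInfo, and an unfiltered /runtime.v1.RuntimeService/ListContainers (hence the "No filters were applied, returning full container list" lines). The following is a minimal Go sketch of issuing those calls over the CRI gRPC socket; the socket path is CRI-O's conventional default and is an assumption here, and errors are simply raised as panics.

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Assumption: CRI-O's default CRI endpoint.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		rt := runtimeapi.NewRuntimeServiceClient(conn)
		img := runtimeapi.NewImageServiceClient(conn)

		// /runtime.v1.RuntimeService/Version
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

		// /runtime.v1.ImageService/ImageFsInfo reports image-store usage, as in
		// the overlay-images mountpoint figures logged above.
		fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
		if err != nil {
			panic(err)
		}
		for _, f := range fs.ImageFilesystems {
			fmt.Printf("%s: %d bytes, %d inodes\n", f.FsId.Mountpoint, f.UsedBytes.Value, f.InodesUsed.Value)
		}

		// /runtime.v1.RuntimeService/ListContainers with an empty filter returns
		// every container, which is exactly what the responses above dump.
		list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{Filter: &runtimeapi.ContainerFilter{}})
		if err != nil {
			panic(err)
		}
		for _, c := range list.Containers {
			fmt.Printf("%-22s %-28s %s\n", c.State, c.Metadata.Name, c.Id[:12])
		}
	}

From the command line, crictl version, crictl imagefsinfo, and crictl ps exercise the same endpoints.
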
	Oct 18 14:19:38 addons-891059 crio[822]: time="2025-10-18 14:19:38.567780684Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f96f548f-6062-4a09-839c-18e80a7e22ae name=/runtime.v1.RuntimeService/Version
	Oct 18 14:19:38 addons-891059 crio[822]: time="2025-10-18 14:19:38.567855434Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f96f548f-6062-4a09-839c-18e80a7e22ae name=/runtime.v1.RuntimeService/Version
	Oct 18 14:19:38 addons-891059 crio[822]: time="2025-10-18 14:19:38.569211347Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=39ea2b65-3f1e-49f0-9c13-92a064bf07e3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:19:38 addons-891059 crio[822]: time="2025-10-18 14:19:38.572407028Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760797178572372023,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:520517,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=39ea2b65-3f1e-49f0-9c13-92a064bf07e3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:19:38 addons-891059 crio[822]: time="2025-10-18 14:19:38.573413982Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=112b18e2-bb5c-4929-9c36-44495feb6e73 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:19:38 addons-891059 crio[822]: time="2025-10-18 14:19:38.573500956Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=112b18e2-bb5c-4929-9c36-44495feb6e73 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:19:38 addons-891059 crio[822]: time="2025-10-18 14:19:38.573936965Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4019b2f5a82ebc7fb6dabae9a874d699665a5d8c69de73eb709ca4a501ac015,PodSandboxId:871fa03a650614957b7d3d2014f39478cf8cb5cd45eb550c6abd6222b43732a9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760796662606988160,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 75ccff45-9202-4152-b90e-8a5a6d306c7d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ce2976bee33494c9148720fc6f41dafc7c06699c436b9f7352992e408fc1ce,PodSandboxId:2f9eb1464924400027510bd40640a85e472321a499aaff7e545d8f90a3a2b454,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760796649028158931,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-bphwz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5355fea1-7cc1-4587-853e-61aaaa6f569e,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.po
rts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a6267021fe47465dfb0a972ca3ac1853819fcb8ec9c4af79da3515676f56c70d,PodSandboxId:7483a2b2bce44deaa3b7126ad65266f9ccb9eb59517cc399fde2646bdce00e31,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a
7bf2,State:CONTAINER_EXITED,CreatedAt:1760796634343510547,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lz2l5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: edbb1e3e-09f2-4958-b943-de86e541c2ab,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:405281ec9edfa02e6ef1722dec6adc497496544ed9e116c4827e07faa66e42b3,PodSandboxId:784fb9851d0e370b86d85cb15f009b0ada6ea2b7f21e505158415537390f7d3a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba
112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760796631912253285,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nbrm2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e48f1e46-67fb-4c71-bc01-b2f3743345f0,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:751b2df6a5bf4c3261a679f6e961086b9a7e8a0d308b47ba5a823ed41d50ff7c,PodSandboxId:e7adc46dd97a6e6351f075aad05529d7968ddcfdb815b441bff765545717c999,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760796621649083356,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-bz8k2,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 32f0a88f-aea2-4621-a5b1-df5a3fb86a2b,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3faa5d947b9ededdb0f9530cfb2606f9d20f027050a247e368207048d7856361,PodSandboxId:04626452678ece1669cf1b64aa42ec4e38880fec5bfbbb2efb6abcab66a2eba0,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd34
6a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760796611084064989,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2be3a2-f8a7-4762-a4a6-aeea42df7e21,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da75007bac0f47603bb3540fd8ae444427639a840b26793c26a279445acc6504,PodSandboxId:bf130a85fe68d5cdda719544aa9afd112627aeb7acb1df2c62daeedf486112a3,Metadata:&ContainerMetadata{Name
:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760796577983458040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6f8bdeb-9db0-44f3-b3cb-8396901acaf5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90350cf8ae05058e381c6f06dfaaa1b66c33001b294c94602cbb4601d22e5bc2,PodSandboxId:b439dd6e51abd6ee7156af98c543df3bcd516cd309de6b0b6fd934ae60d4579a,Metadata:&ContainerMetadata{Name:amd-gpu-dev
ice-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760796574525913819,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-c5cbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64430541-160f-413b-b21e-6636047a8859,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b099b5b37807cb6ddae926ed2ce7fd3b3113ee1520cb817da8f25923c16c925,PodSandboxId:ba30da275bea105c47caa89fd0d4a924e96bd43b200434b972d0f1686c5cdb46,Meta
data:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760796569075663973,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9t6mk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2cf3593-0ffc-49aa-ab5d-1ecf71d259cc,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97e1670c81585e6415c369e52af3deebb586e548711c359ac4fe22d13bfbf881,PodSandboxId:8fb6c60415fdaa40da442b8d93572f59350e86e5027e05f1e616ddc3e66d1895,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760796567868668763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ckpzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3ac992c-4401-40f5-93dd-7a525ec3b2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f010fdc156cb398c84f19945fc8b9f186ef23cb554bce047cf0bdadc63ef552,PodSandboxId:bfa6fdc1baf4d2d9eaa5d56358672ee6314ea527df88bc7c5cfbb6d68599a772,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760796553601510681,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4360d09804819a4ab0d1ffed7423947,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":
\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873a633e0ebfdc97218e103cd398dde377449c146a2b3d8affa3222d72e07fad,PodSandboxId:4b35987ede0428e0950b004d1104001ead21d6b6989238185c2fb74d3cf3bf44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760796553612924961,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1348b107c675acfd26c3d687c91d60c5,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50cc3d2477595030b199dee8a2c8a4cb8f2f508dbbe7bdf89f535de0d3d1d6b6,PodSandboxId:b783fc0f686a0773f409244090fb0347fd53adfbe3110712527fc3d39b81e149,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760796553577778017,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-891059,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 5086595138b36f6eb8ac54e83c6bc182,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:550e8ca214589028236bc3f3e98efbed492d3f84addbacedfb6929bee8541bab,PodSandboxId:c8fbc229d4f5f4b227bfc321c455f9928cc82e2099fb0746d33c7d9c893295f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760796553532990421,Labels:map[string]string{io.kubernetes.container.name: kube
-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97082571db3e60e44c3d60e99a384436,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=112b18e2-bb5c-4929-9c36-44495feb6e73 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:19:38 addons-891059 crio[822]: time="2025-10-18 14:19:38.613367365Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1905ed79-a7be-4a6c-8137-313bbebae6ab name=/runtime.v1.RuntimeService/Version
	Oct 18 14:19:38 addons-891059 crio[822]: time="2025-10-18 14:19:38.613488850Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1905ed79-a7be-4a6c-8137-313bbebae6ab name=/runtime.v1.RuntimeService/Version
	Oct 18 14:19:38 addons-891059 crio[822]: time="2025-10-18 14:19:38.614982526Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7f3381c8-6f7d-4142-97b2-c07f2c179e5d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:19:38 addons-891059 crio[822]: time="2025-10-18 14:19:38.617290959Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760797178617256801,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:520517,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7f3381c8-6f7d-4142-97b2-c07f2c179e5d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:19:38 addons-891059 crio[822]: time="2025-10-18 14:19:38.617929900Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ae607d6-c52f-495a-a128-d6e169d8d652 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:19:38 addons-891059 crio[822]: time="2025-10-18 14:19:38.617995350Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ae607d6-c52f-495a-a128-d6e169d8d652 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:19:38 addons-891059 crio[822]: time="2025-10-18 14:19:38.618382239Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4019b2f5a82ebc7fb6dabae9a874d699665a5d8c69de73eb709ca4a501ac015,PodSandboxId:871fa03a650614957b7d3d2014f39478cf8cb5cd45eb550c6abd6222b43732a9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760796662606988160,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 75ccff45-9202-4152-b90e-8a5a6d306c7d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ce2976bee33494c9148720fc6f41dafc7c06699c436b9f7352992e408fc1ce,PodSandboxId:2f9eb1464924400027510bd40640a85e472321a499aaff7e545d8f90a3a2b454,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760796649028158931,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-bphwz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5355fea1-7cc1-4587-853e-61aaaa6f569e,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.po
rts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a6267021fe47465dfb0a972ca3ac1853819fcb8ec9c4af79da3515676f56c70d,PodSandboxId:7483a2b2bce44deaa3b7126ad65266f9ccb9eb59517cc399fde2646bdce00e31,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a
7bf2,State:CONTAINER_EXITED,CreatedAt:1760796634343510547,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lz2l5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: edbb1e3e-09f2-4958-b943-de86e541c2ab,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:405281ec9edfa02e6ef1722dec6adc497496544ed9e116c4827e07faa66e42b3,PodSandboxId:784fb9851d0e370b86d85cb15f009b0ada6ea2b7f21e505158415537390f7d3a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba
112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760796631912253285,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nbrm2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e48f1e46-67fb-4c71-bc01-b2f3743345f0,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:751b2df6a5bf4c3261a679f6e961086b9a7e8a0d308b47ba5a823ed41d50ff7c,PodSandboxId:e7adc46dd97a6e6351f075aad05529d7968ddcfdb815b441bff765545717c999,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760796621649083356,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-bz8k2,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 32f0a88f-aea2-4621-a5b1-df5a3fb86a2b,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3faa5d947b9ededdb0f9530cfb2606f9d20f027050a247e368207048d7856361,PodSandboxId:04626452678ece1669cf1b64aa42ec4e38880fec5bfbbb2efb6abcab66a2eba0,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd34
6a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760796611084064989,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2be3a2-f8a7-4762-a4a6-aeea42df7e21,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da75007bac0f47603bb3540fd8ae444427639a840b26793c26a279445acc6504,PodSandboxId:bf130a85fe68d5cdda719544aa9afd112627aeb7acb1df2c62daeedf486112a3,Metadata:&ContainerMetadata{Name
:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760796577983458040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6f8bdeb-9db0-44f3-b3cb-8396901acaf5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90350cf8ae05058e381c6f06dfaaa1b66c33001b294c94602cbb4601d22e5bc2,PodSandboxId:b439dd6e51abd6ee7156af98c543df3bcd516cd309de6b0b6fd934ae60d4579a,Metadata:&ContainerMetadata{Name:amd-gpu-dev
ice-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760796574525913819,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-c5cbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64430541-160f-413b-b21e-6636047a8859,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b099b5b37807cb6ddae926ed2ce7fd3b3113ee1520cb817da8f25923c16c925,PodSandboxId:ba30da275bea105c47caa89fd0d4a924e96bd43b200434b972d0f1686c5cdb46,Meta
data:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760796569075663973,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9t6mk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2cf3593-0ffc-49aa-ab5d-1ecf71d259cc,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97e1670c81585e6415c369e52af3deebb586e548711c359ac4fe22d13bfbf881,PodSandboxId:8fb6c60415fdaa40da442b8d93572f59350e86e5027e05f1e616ddc3e66d1895,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760796567868668763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ckpzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3ac992c-4401-40f5-93dd-7a525ec3b2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f010fdc156cb398c84f19945fc8b9f186ef23cb554bce047cf0bdadc63ef552,PodSandboxId:bfa6fdc1baf4d2d9eaa5d56358672ee6314ea527df88bc7c5cfbb6d68599a772,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760796553601510681,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4360d09804819a4ab0d1ffed7423947,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":
\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873a633e0ebfdc97218e103cd398dde377449c146a2b3d8affa3222d72e07fad,PodSandboxId:4b35987ede0428e0950b004d1104001ead21d6b6989238185c2fb74d3cf3bf44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760796553612924961,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1348b107c675acfd26c3d687c91d60c5,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50cc3d2477595030b199dee8a2c8a4cb8f2f508dbbe7bdf89f535de0d3d1d6b6,PodSandboxId:b783fc0f686a0773f409244090fb0347fd53adfbe3110712527fc3d39b81e149,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760796553577778017,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-891059,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 5086595138b36f6eb8ac54e83c6bc182,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:550e8ca214589028236bc3f3e98efbed492d3f84addbacedfb6929bee8541bab,PodSandboxId:c8fbc229d4f5f4b227bfc321c455f9928cc82e2099fb0746d33c7d9c893295f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760796553532990421,Labels:map[string]string{io.kubernetes.container.name: kube
-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97082571db3e60e44c3d60e99a384436,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ae607d6-c52f-495a-a128-d6e169d8d652 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:19:38 addons-891059 crio[822]: time="2025-10-18 14:19:38.657493378Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fd45c045-d068-4504-a41a-c7eb8c23038b name=/runtime.v1.RuntimeService/Version
	Oct 18 14:19:38 addons-891059 crio[822]: time="2025-10-18 14:19:38.657767387Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fd45c045-d068-4504-a41a-c7eb8c23038b name=/runtime.v1.RuntimeService/Version
	Oct 18 14:19:38 addons-891059 crio[822]: time="2025-10-18 14:19:38.658840766Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8a985daf-df31-4f71-ac7e-676cbe338102 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:19:38 addons-891059 crio[822]: time="2025-10-18 14:19:38.660142564Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760797178660114207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:520517,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a985daf-df31-4f71-ac7e-676cbe338102 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:19:38 addons-891059 crio[822]: time="2025-10-18 14:19:38.661407183Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bb843bbb-7e6c-4367-af07-3defc60c4149 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:19:38 addons-891059 crio[822]: time="2025-10-18 14:19:38.661488054Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bb843bbb-7e6c-4367-af07-3defc60c4149 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:19:38 addons-891059 crio[822]: time="2025-10-18 14:19:38.661860471Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4019b2f5a82ebc7fb6dabae9a874d699665a5d8c69de73eb709ca4a501ac015,PodSandboxId:871fa03a650614957b7d3d2014f39478cf8cb5cd45eb550c6abd6222b43732a9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760796662606988160,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 75ccff45-9202-4152-b90e-8a5a6d306c7d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ce2976bee33494c9148720fc6f41dafc7c06699c436b9f7352992e408fc1ce,PodSandboxId:2f9eb1464924400027510bd40640a85e472321a499aaff7e545d8f90a3a2b454,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760796649028158931,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-bphwz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5355fea1-7cc1-4587-853e-61aaaa6f569e,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.po
rts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a6267021fe47465dfb0a972ca3ac1853819fcb8ec9c4af79da3515676f56c70d,PodSandboxId:7483a2b2bce44deaa3b7126ad65266f9ccb9eb59517cc399fde2646bdce00e31,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a
7bf2,State:CONTAINER_EXITED,CreatedAt:1760796634343510547,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lz2l5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: edbb1e3e-09f2-4958-b943-de86e541c2ab,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:405281ec9edfa02e6ef1722dec6adc497496544ed9e116c4827e07faa66e42b3,PodSandboxId:784fb9851d0e370b86d85cb15f009b0ada6ea2b7f21e505158415537390f7d3a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba
112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760796631912253285,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nbrm2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e48f1e46-67fb-4c71-bc01-b2f3743345f0,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:751b2df6a5bf4c3261a679f6e961086b9a7e8a0d308b47ba5a823ed41d50ff7c,PodSandboxId:e7adc46dd97a6e6351f075aad05529d7968ddcfdb815b441bff765545717c999,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760796621649083356,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-bz8k2,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 32f0a88f-aea2-4621-a5b1-df5a3fb86a2b,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3faa5d947b9ededdb0f9530cfb2606f9d20f027050a247e368207048d7856361,PodSandboxId:04626452678ece1669cf1b64aa42ec4e38880fec5bfbbb2efb6abcab66a2eba0,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd34
6a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760796611084064989,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2be3a2-f8a7-4762-a4a6-aeea42df7e21,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da75007bac0f47603bb3540fd8ae444427639a840b26793c26a279445acc6504,PodSandboxId:bf130a85fe68d5cdda719544aa9afd112627aeb7acb1df2c62daeedf486112a3,Metadata:&ContainerMetadata{Name
:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760796577983458040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6f8bdeb-9db0-44f3-b3cb-8396901acaf5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90350cf8ae05058e381c6f06dfaaa1b66c33001b294c94602cbb4601d22e5bc2,PodSandboxId:b439dd6e51abd6ee7156af98c543df3bcd516cd309de6b0b6fd934ae60d4579a,Metadata:&ContainerMetadata{Name:amd-gpu-dev
ice-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760796574525913819,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-c5cbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64430541-160f-413b-b21e-6636047a8859,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b099b5b37807cb6ddae926ed2ce7fd3b3113ee1520cb817da8f25923c16c925,PodSandboxId:ba30da275bea105c47caa89fd0d4a924e96bd43b200434b972d0f1686c5cdb46,Meta
data:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760796569075663973,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9t6mk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2cf3593-0ffc-49aa-ab5d-1ecf71d259cc,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97e1670c81585e6415c369e52af3deebb586e548711c359ac4fe22d13bfbf881,PodSandboxId:8fb6c60415fdaa40da442b8d93572f59350e86e5027e05f1e616ddc3e66d1895,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760796567868668763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ckpzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3ac992c-4401-40f5-93dd-7a525ec3b2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f010fdc156cb398c84f19945fc8b9f186ef23cb554bce047cf0bdadc63ef552,PodSandboxId:bfa6fdc1baf4d2d9eaa5d56358672ee6314ea527df88bc7c5cfbb6d68599a772,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760796553601510681,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4360d09804819a4ab0d1ffed7423947,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":
\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873a633e0ebfdc97218e103cd398dde377449c146a2b3d8affa3222d72e07fad,PodSandboxId:4b35987ede0428e0950b004d1104001ead21d6b6989238185c2fb74d3cf3bf44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760796553612924961,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1348b107c675acfd26c3d687c91d60c5,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50cc3d2477595030b199dee8a2c8a4cb8f2f508dbbe7bdf89f535de0d3d1d6b6,PodSandboxId:b783fc0f686a0773f409244090fb0347fd53adfbe3110712527fc3d39b81e149,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760796553577778017,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-891059,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 5086595138b36f6eb8ac54e83c6bc182,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:550e8ca214589028236bc3f3e98efbed492d3f84addbacedfb6929bee8541bab,PodSandboxId:c8fbc229d4f5f4b227bfc321c455f9928cc82e2099fb0746d33c7d9c893295f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760796553532990421,Labels:map[string]string{io.kubernetes.container.name: kube
-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97082571db3e60e44c3d60e99a384436,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bb843bbb-7e6c-4367-af07-3defc60c4149 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a4019b2f5a82e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          8 minutes ago       Running             busybox                   0                   871fa03a65061       busybox
	90ce2976bee33       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd             8 minutes ago       Running             controller                0                   2f9eb14649244       ingress-nginx-controller-675c5ddd98-bphwz
	a6267021fe474       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   9 minutes ago       Exited              patch                     0                   7483a2b2bce44       ingress-nginx-admission-patch-lz2l5
	405281ec9edfa       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   9 minutes ago       Exited              create                    0                   784fb9851d0e3       ingress-nginx-admission-create-nbrm2
	751b2df6a5bf4       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb            9 minutes ago       Running             gadget                    0                   e7adc46dd97a6       gadget-bz8k2
	3faa5d947b9ed       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               9 minutes ago       Running             minikube-ingress-dns      0                   04626452678ec       kube-ingress-dns-minikube
	da75007bac0f4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             10 minutes ago      Running             storage-provisioner       0                   bf130a85fe68d       storage-provisioner
	90350cf8ae050       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     10 minutes ago      Running             amd-gpu-device-plugin     0                   b439dd6e51abd       amd-gpu-device-plugin-c5cbb
	5b099b5b37807       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             10 minutes ago      Running             coredns                   0                   ba30da275bea1       coredns-66bc5c9577-9t6mk
	97e1670c81585       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             10 minutes ago      Running             kube-proxy                0                   8fb6c60415fda       kube-proxy-ckpzl
	873a633e0ebfd       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             10 minutes ago      Running             kube-controller-manager   0                   4b35987ede042       kube-controller-manager-addons-891059
	4f010fdc156cb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             10 minutes ago      Running             etcd                      0                   bfa6fdc1baf4d       etcd-addons-891059
	50cc3d2477595       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             10 minutes ago      Running             kube-scheduler            0                   b783fc0f686a0       kube-scheduler-addons-891059
	550e8ca214589       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             10 minutes ago      Running             kube-apiserver            0                   c8fbc229d4f5f       kube-apiserver-addons-891059
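
	Note: this table is the crictl view of the same containers listed in the debug response above. To reproduce the snapshot directly on the node (a sketch, assuming shell access to the minikube VM):
	
	    minikube -p addons-891059 ssh -- sudo crictl ps -a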
	
	
	==> coredns [5b099b5b37807cb6ddae926ed2ce7fd3b3113ee1520cb817da8f25923c16c925] <==
	[INFO] 10.244.0.8:38553 - 35504 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000072442s
	[INFO] 10.244.0.8:41254 - 10457 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000126469s
	[INFO] 10.244.0.8:41254 - 10148 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000351753s
	[INFO] 10.244.0.8:58812 - 14712 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000165201s
	[INFO] 10.244.0.8:58812 - 14408 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000227737s
	[INFO] 10.244.0.8:46072 - 17563 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000089989s
	[INFO] 10.244.0.8:46072 - 17331 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000357865s
	[INFO] 10.244.0.8:44214 - 24523 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000103993s
	[INFO] 10.244.0.8:44214 - 24308 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000319225s
	[INFO] 10.244.0.23:53101 - 38230 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000789741s
	[INFO] 10.244.0.23:39743 - 4637 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00014608s
	[INFO] 10.244.0.23:34680 - 45484 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000257617s
	[INFO] 10.244.0.23:57667 - 2834 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000156321s
	[INFO] 10.244.0.23:49060 - 9734 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000228026s
	[INFO] 10.244.0.23:49380 - 40146 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00011544s
	[INFO] 10.244.0.23:59610 - 60837 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001192659s
	[INFO] 10.244.0.23:43936 - 55741 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001950004s
	[INFO] 10.244.0.28:45423 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NXDOMAIN qr,aa,rd 149 0.000594412s
	[INFO] 10.244.0.28:35326 - 3 "AAAA IN registry.kube-system.svc.cluster.local.default.svc.cluster.local. udp 82 false 512" NXDOMAIN qr,aa,rd 175 0.000279094s
	[INFO] 10.244.0.28:34121 - 4 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000115216s
	[INFO] 10.244.0.28:43026 - 5 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000225891s
	[INFO] 10.244.0.28:58520 - 6 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NXDOMAIN qr,aa,rd 149 0.000121233s
	[INFO] 10.244.0.28:39709 - 7 "A IN registry.kube-system.svc.cluster.local.default.svc.cluster.local. udp 82 false 512" NXDOMAIN qr,aa,rd 175 0.000126579s
	[INFO] 10.244.0.28:46571 - 8 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000295561s
	[INFO] 10.244.0.28:34480 - 9 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000104287s
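
	Note: the NXDOMAIN/NOERROR pairs above are the standard Kubernetes search-path expansion (default.svc.cluster.local, svc.cluster.local, cluster.local) driven by ndots:5 in the pod resolver config; the final NXDOMAIN answers from 10.244.0.28 for registry.kube-system.svc.cluster.local show that client failing to resolve the registry Service, which is relevant to the Registry test failure. A sketch to inspect the resolver config, using the busybox pod from the table above (the values shown are the usual minikube defaults, assumed here):
	
	    kubectl --context addons-891059 exec busybox -- cat /etc/resolv.conf
	    # expected shape (assumed defaults):
	    #   search default.svc.cluster.local svc.cluster.local cluster.local
	    #   nameserver 10.96.0.10
	    #   options ndots:5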
	
	
	==> describe nodes <==
	Name:               addons-891059
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-891059
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=addons-891059
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T14_09_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-891059
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 14:09:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-891059
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 14:19:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 14:17:20 +0000   Sat, 18 Oct 2025 14:09:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 14:17:20 +0000   Sat, 18 Oct 2025 14:09:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 14:17:20 +0000   Sat, 18 Oct 2025 14:09:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 14:17:20 +0000   Sat, 18 Oct 2025 14:09:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.100
	  Hostname:    addons-891059
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 372d92314fa4448095fc5052e6676096
	  System UUID:                372d9231-4fa4-4480-95fc-5052e6676096
	  Boot ID:                    7e38709f-8590-4225-8b4d-3bbac20f6c51
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m38s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m11s
	  default                     test-local-path                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m21s
	  gadget                      gadget-bz8k2                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-bphwz    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         10m
	  kube-system                 amd-gpu-device-plugin-c5cbb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-9t6mk                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     10m
	  kube-system                 etcd-addons-891059                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-891059                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-891059        200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-ckpzl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-891059                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 10m   kube-proxy       
	  Normal  Starting                 10m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m   kubelet          Node addons-891059 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m   kubelet          Node addons-891059 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m   kubelet          Node addons-891059 status is now: NodeHasSufficientPID
	  Normal  NodeReady                10m   kubelet          Node addons-891059 status is now: NodeReady
	  Normal  RegisteredNode           10m   node-controller  Node addons-891059 event: Registered Node addons-891059 in Controller
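
	Note: the node summary above (conditions, allocatable resources, and the 850m/2000m CPU commitment) can be regenerated at any time with:
	
	    kubectl --context addons-891059 describe node addons-891059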
	
	
	==> dmesg <==
	[  +0.620971] kauditd_printk_skb: 414 callbacks suppressed
	[ +15.304937] kauditd_printk_skb: 49 callbacks suppressed
	[Oct18 14:10] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.485780] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.577564] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.762881] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.526985] kauditd_printk_skb: 26 callbacks suppressed
	[  +2.667244] kauditd_printk_skb: 76 callbacks suppressed
	[  +3.038951] kauditd_printk_skb: 160 callbacks suppressed
	[  +5.632898] kauditd_printk_skb: 88 callbacks suppressed
	[  +5.124721] kauditd_printk_skb: 47 callbacks suppressed
	[Oct18 14:11] kauditd_printk_skb: 41 callbacks suppressed
	[ +11.104883] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.000298] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.000091] kauditd_printk_skb: 61 callbacks suppressed
	[  +0.000058] kauditd_printk_skb: 94 callbacks suppressed
	[  +5.819366] kauditd_printk_skb: 58 callbacks suppressed
	[Oct18 14:12] kauditd_printk_skb: 38 callbacks suppressed
	[  +7.221421] kauditd_printk_skb: 45 callbacks suppressed
	[ +11.837047] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.423844] kauditd_printk_skb: 58 callbacks suppressed
	[Oct18 14:13] kauditd_printk_skb: 25 callbacks suppressed
	[Oct18 14:14] kauditd_printk_skb: 17 callbacks suppressed
	[ +31.538641] kauditd_printk_skb: 22 callbacks suppressed
	[Oct18 14:17] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [4f010fdc156cb398c84f19945fc8b9f186ef23cb554bce047cf0bdadc63ef552] <==
	{"level":"warn","ts":"2025-10-18T14:10:27.790385Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.236345ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T14:10:27.790510Z","caller":"traceutil/trace.go:172","msg":"trace[732980754] range","detail":"{range_begin:/registry/deployments; range_end:; response_count:0; response_revision:980; }","duration":"108.373321ms","start":"2025-10-18T14:10:27.682130Z","end":"2025-10-18T14:10:27.790503Z","steps":["trace[732980754] 'agreement among raft nodes before linearized reading'  (duration: 108.1351ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T14:10:31.360128Z","caller":"traceutil/trace.go:172","msg":"trace[1845619058] transaction","detail":"{read_only:false; response_revision:997; number_of_response:1; }","duration":"140.456007ms","start":"2025-10-18T14:10:31.219657Z","end":"2025-10-18T14:10:31.360113Z","steps":["trace[1845619058] 'process raft request'  (duration: 140.331758ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T14:10:46.208681Z","caller":"traceutil/trace.go:172","msg":"trace[1766959808] transaction","detail":"{read_only:false; response_revision:1104; number_of_response:1; }","duration":"186.674963ms","start":"2025-10-18T14:10:46.021984Z","end":"2025-10-18T14:10:46.208659Z","steps":["trace[1766959808] 'process raft request'  (duration: 186.50291ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T14:11:02.952579Z","caller":"traceutil/trace.go:172","msg":"trace[1731516554] linearizableReadLoop","detail":"{readStateIndex:1235; appliedIndex:1235; }","duration":"113.28639ms","start":"2025-10-18T14:11:02.839276Z","end":"2025-10-18T14:11:02.952562Z","steps":["trace[1731516554] 'read index received'  (duration: 113.240159ms)","trace[1731516554] 'applied index is now lower than readState.Index'  (duration: 45.276µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T14:11:02.953674Z","caller":"traceutil/trace.go:172","msg":"trace[374499777] transaction","detail":"{read_only:false; response_revision:1198; number_of_response:1; }","duration":"131.03911ms","start":"2025-10-18T14:11:02.822625Z","end":"2025-10-18T14:11:02.953664Z","steps":["trace[374499777] 'process raft request'  (duration: 130.864849ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T14:11:02.953956Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.682576ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T14:11:02.958891Z","caller":"traceutil/trace.go:172","msg":"trace[2098939205] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1198; }","duration":"119.626167ms","start":"2025-10-18T14:11:02.839251Z","end":"2025-10-18T14:11:02.958878Z","steps":["trace[2098939205] 'agreement among raft nodes before linearized reading'  (duration: 114.665108ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T14:14:17.804829Z","caller":"traceutil/trace.go:172","msg":"trace[38135400] linearizableReadLoop","detail":"{readStateIndex:1845; appliedIndex:1845; }","duration":"254.786987ms","start":"2025-10-18T14:14:17.550008Z","end":"2025-10-18T14:14:17.804795Z","steps":["trace[38135400] 'read index received'  (duration: 254.774829ms)","trace[38135400] 'applied index is now lower than readState.Index'  (duration: 11.099µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T14:14:17.805068Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"255.018833ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T14:14:17.805091Z","caller":"traceutil/trace.go:172","msg":"trace[1453244013] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1761; }","duration":"255.081798ms","start":"2025-10-18T14:14:17.550004Z","end":"2025-10-18T14:14:17.805086Z","steps":["trace[1453244013] 'agreement among raft nodes before linearized reading'  (duration: 254.990525ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T14:14:17.805508Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.4057ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T14:14:17.805595Z","caller":"traceutil/trace.go:172","msg":"trace[926038607] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1762; }","duration":"133.500196ms","start":"2025-10-18T14:14:17.672087Z","end":"2025-10-18T14:14:17.805587Z","steps":["trace[926038607] 'agreement among raft nodes before linearized reading'  (duration: 133.363964ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T14:14:17.805922Z","caller":"traceutil/trace.go:172","msg":"trace[451226295] transaction","detail":"{read_only:false; response_revision:1762; number_of_response:1; }","duration":"260.563702ms","start":"2025-10-18T14:14:17.545349Z","end":"2025-10-18T14:14:17.805913Z","steps":["trace[451226295] 'process raft request'  (duration: 259.940194ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T14:14:23.347090Z","caller":"traceutil/trace.go:172","msg":"trace[355090838] linearizableReadLoop","detail":"{readStateIndex:1864; appliedIndex:1864; }","duration":"301.568388ms","start":"2025-10-18T14:14:23.045504Z","end":"2025-10-18T14:14:23.347073Z","steps":["trace[355090838] 'read index received'  (duration: 301.562884ms)","trace[355090838] 'applied index is now lower than readState.Index'  (duration: 4.302µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T14:14:23.347216Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"301.743884ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T14:14:23.347238Z","caller":"traceutil/trace.go:172","msg":"trace[954386242] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1779; }","duration":"301.780363ms","start":"2025-10-18T14:14:23.045451Z","end":"2025-10-18T14:14:23.347231Z","steps":["trace[954386242] 'agreement among raft nodes before linearized reading'  (duration: 301.721286ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T14:14:23.347296Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T14:14:23.045431Z","time spent":"301.853987ms","remote":"127.0.0.1:53840","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-10-18T14:14:23.347302Z","caller":"traceutil/trace.go:172","msg":"trace[648344144] transaction","detail":"{read_only:false; response_revision:1780; number_of_response:1; }","duration":"307.588862ms","start":"2025-10-18T14:14:23.039701Z","end":"2025-10-18T14:14:23.347290Z","steps":["trace[648344144] 'process raft request'  (duration: 307.402517ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T14:14:23.347441Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T14:14:23.039679Z","time spent":"307.656367ms","remote":"127.0.0.1:53970","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":677,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-36nbpcgspzmnrg7y5avwjcoroi\" mod_revision:1752 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-36nbpcgspzmnrg7y5avwjcoroi\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-36nbpcgspzmnrg7y5avwjcoroi\" > >"}
	{"level":"warn","ts":"2025-10-18T14:14:23.347489Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"166.844351ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T14:14:23.347507Z","caller":"traceutil/trace.go:172","msg":"trace[2122422757] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1780; }","duration":"166.862778ms","start":"2025-10-18T14:14:23.180639Z","end":"2025-10-18T14:14:23.347502Z","steps":["trace[2122422757] 'agreement among raft nodes before linearized reading'  (duration: 166.829225ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T14:19:14.932031Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1757}
	{"level":"info","ts":"2025-10-18T14:19:14.998788Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1757,"took":"65.991512ms","hash":1411058327,"current-db-size-bytes":6139904,"current-db-size":"6.1 MB","current-db-size-in-use-bytes":4022272,"current-db-size-in-use":"4.0 MB"}
	{"level":"info","ts":"2025-10-18T14:19:14.998880Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1411058327,"revision":1757,"compact-revision":-1}
	
	
	==> kernel <==
	 14:19:39 up 10 min,  0 users,  load average: 0.44, 1.05, 0.85
	Linux addons-891059 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [550e8ca214589028236bc3f3e98efbed492d3f84addbacedfb6929bee8541bab] <==
	E1018 14:10:41.349441       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1018 14:10:41.403792       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1018 14:11:09.006479       1 conn.go:339] Error on socket receive: read tcp 192.168.39.100:8443->192.168.39.1:51796: use of closed network connection
	E1018 14:11:09.215206       1 conn.go:339] Error on socket receive: read tcp 192.168.39.100:8443->192.168.39.1:51814: use of closed network connection
	I1018 14:11:36.964050       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1018 14:11:37.174177       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.128.177"}
	I1018 14:11:42.373806       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1018 14:12:52.429043       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.125.191"}
	I1018 14:17:30.457833       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 14:17:30.458196       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1018 14:17:30.493221       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 14:17:30.493260       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1018 14:17:30.521645       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 14:17:30.521730       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1018 14:17:30.537276       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 14:17:30.537308       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1018 14:17:30.570217       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 14:17:30.570306       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1018 14:17:31.521973       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1018 14:17:31.571132       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1018 14:17:31.584527       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1018 14:19:16.454331       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
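
	Note: the 503 on v1beta1.metrics.k8s.io typically means the aggregated API's backing service (metrics-server) had no ready endpoints at that moment, and the "Terminating all watchers" lines at 14:17:31 line up with the volumesnapshot CRDs being removed as addons were disabled. To check aggregated API health (a sketch; the APIService may already be gone once the addon is disabled):
	
	    kubectl --context addons-891059 get apiservice v1beta1.metrics.k8s.io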
	
	
	==> kube-controller-manager [873a633e0ebfdc97218e103cd398dde377449c146a2b3d8affa3222d72e07fad] <==
	E1018 14:17:40.389907       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 14:17:40.390872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 14:17:46.647135       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 14:17:46.648598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 14:17:47.911229       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 14:17:47.912502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 14:17:52.677952       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 14:17:52.678953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1018 14:17:53.633914       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1018 14:17:53.634372       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 14:17:53.692754       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1018 14:17:53.693004       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1018 14:18:05.609775       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 14:18:05.610852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 14:18:09.051866       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 14:18:09.053721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 14:18:17.055268       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 14:18:17.056654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1018 14:18:23.491992       1 reconciler.go:364] "attacherDetacher.AttachVolume started" logger="persistentvolume-attach-detach-controller" volumeName="kubernetes.io/csi/hostpath.csi.k8s.io^55dec727-ac2c-11f0-a229-a675541a7df1" nodeName="addons-891059" scheduledPods=["default/task-pv-pod"]
	E1018 14:18:50.226312       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 14:18:50.227726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 14:18:55.003197       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 14:18:55.004294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 14:19:03.603768       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 14:19:03.605328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [97e1670c81585e6415c369e52af3deebb586e548711c359ac4fe22d13bfbf881] <==
	I1018 14:09:29.078784       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 14:09:29.179875       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 14:09:29.180064       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.100"]
	E1018 14:09:29.180168       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 14:09:29.435752       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1018 14:09:29.435855       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1018 14:09:29.435886       1 server_linux.go:132] "Using iptables Proxier"
	I1018 14:09:29.458405       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 14:09:29.459486       1 server.go:527] "Version info" version="v1.34.1"
	I1018 14:09:29.459499       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 14:09:29.471972       1 config.go:200] "Starting service config controller"
	I1018 14:09:29.472688       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 14:09:29.472718       1 config.go:106] "Starting endpoint slice config controller"
	I1018 14:09:29.472724       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 14:09:29.472739       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 14:09:29.472745       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 14:09:29.474046       1 config.go:309] "Starting node config controller"
	I1018 14:09:29.474055       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 14:09:29.474060       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 14:09:29.573160       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 14:09:29.573457       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 14:09:29.573493       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
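
	Note: the "Kube-proxy configuration may be incomplete or incorrect" warning is advisory; with nodePortAddresses unset, NodePorts are simply accepted on all local IPs. A sketch to inspect the setting it refers to (assuming the standard kubeadm kube-proxy ConfigMap):
	
	    kubectl --context addons-891059 -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses
	    # restricting NodePorts to the primary node IP would be, in KubeProxyConfiguration:
	    #   nodePortAddresses: ["primary"]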
	
	
	==> kube-scheduler [50cc3d2477595030b199dee8a2c8a4cb8f2f508dbbe7bdf89f535de0d3d1d6b6] <==
	E1018 14:09:16.517030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 14:09:16.517067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 14:09:16.517111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 14:09:16.517151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 14:09:16.517190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 14:09:16.517227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 14:09:16.517305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 14:09:16.517334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 14:09:16.517377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 14:09:16.517437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 14:09:16.524951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 14:09:17.315107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 14:09:17.350735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 14:09:17.351152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 14:09:17.351207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 14:09:17.375382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 14:09:17.392110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 14:09:17.451119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 14:09:17.490015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 14:09:17.582674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 14:09:17.653362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 14:09:17.692474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 14:09:17.761718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 14:09:17.762010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1018 14:09:18.995741       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 14:18:47 addons-891059 kubelet[1503]: E1018 14:18:47.472945    1503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="95d229e3-8666-49b8-b2d2-2e34ed8f3aab"
	Oct 18 14:18:50 addons-891059 kubelet[1503]: E1018 14:18:50.084076    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760797130083319284  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:520517}  inodes_used:{value:186}}"
	Oct 18 14:18:50 addons-891059 kubelet[1503]: E1018 14:18:50.084105    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760797130083319284  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:520517}  inodes_used:{value:186}}"
	Oct 18 14:18:51 addons-891059 kubelet[1503]: E1018 14:18:51.919899    1503 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 18 14:18:51 addons-891059 kubelet[1503]: E1018 14:18:51.919990    1503 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 18 14:18:51 addons-891059 kubelet[1503]: E1018 14:18:51.920134    1503 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(3922f28b-1c3b-4a38-b461-c5f57823b438): ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 18 14:18:51 addons-891059 kubelet[1503]: E1018 14:18:51.920167    1503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="3922f28b-1c3b-4a38-b461-c5f57823b438"
	Oct 18 14:18:56 addons-891059 kubelet[1503]: I1018 14:18:56.473042    1503 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-c5cbb" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 14:18:57 addons-891059 kubelet[1503]: E1018 14:18:57.479748    1503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:4a35a7836fe08f340a42e25c4ac5eef4439585bbbb817b7bd28b2cd87c742642 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="d6bcb3d3-06c5-4ec8-8496-cf302660e01d"
	Oct 18 14:19:00 addons-891059 kubelet[1503]: E1018 14:19:00.087023    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760797140086288068  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:520517}  inodes_used:{value:186}}"
	Oct 18 14:19:00 addons-891059 kubelet[1503]: E1018 14:19:00.087175    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760797140086288068  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:520517}  inodes_used:{value:186}}"
	Oct 18 14:19:01 addons-891059 kubelet[1503]: E1018 14:19:01.474798    1503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="95d229e3-8666-49b8-b2d2-2e34ed8f3aab"
	Oct 18 14:19:07 addons-891059 kubelet[1503]: E1018 14:19:07.482100    1503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="3922f28b-1c3b-4a38-b461-c5f57823b438"
	Oct 18 14:19:10 addons-891059 kubelet[1503]: E1018 14:19:10.090611    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760797150089744783  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:520517}  inodes_used:{value:186}}"
	Oct 18 14:19:10 addons-891059 kubelet[1503]: E1018 14:19:10.090754    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760797150089744783  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:520517}  inodes_used:{value:186}}"
	Oct 18 14:19:10 addons-891059 kubelet[1503]: E1018 14:19:10.475225    1503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:4a35a7836fe08f340a42e25c4ac5eef4439585bbbb817b7bd28b2cd87c742642 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="d6bcb3d3-06c5-4ec8-8496-cf302660e01d"
	Oct 18 14:19:16 addons-891059 kubelet[1503]: E1018 14:19:16.473829    1503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="95d229e3-8666-49b8-b2d2-2e34ed8f3aab"
	Oct 18 14:19:20 addons-891059 kubelet[1503]: E1018 14:19:20.093907    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760797160093437605  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:520517}  inodes_used:{value:186}}"
	Oct 18 14:19:20 addons-891059 kubelet[1503]: E1018 14:19:20.093970    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760797160093437605  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:520517}  inodes_used:{value:186}}"
	Oct 18 14:19:20 addons-891059 kubelet[1503]: W1018 14:19:20.481167    1503 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/lib/kubelet/plugins/csi-hostpath/csi.sock: connect: connection refused"
	Oct 18 14:19:21 addons-891059 kubelet[1503]: E1018 14:19:21.477760    1503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="3922f28b-1c3b-4a38-b461-c5f57823b438"
	Oct 18 14:19:30 addons-891059 kubelet[1503]: E1018 14:19:30.096960    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760797170096050221  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:520517}  inodes_used:{value:186}}"
	Oct 18 14:19:30 addons-891059 kubelet[1503]: E1018 14:19:30.097047    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760797170096050221  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:520517}  inodes_used:{value:186}}"
	Oct 18 14:19:30 addons-891059 kubelet[1503]: E1018 14:19:30.473464    1503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="95d229e3-8666-49b8-b2d2-2e34ed8f3aab"
	Oct 18 14:19:36 addons-891059 kubelet[1503]: E1018 14:19:36.477340    1503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="3922f28b-1c3b-4a38-b461-c5f57823b438"
	
	
	==> storage-provisioner [da75007bac0f47603bb3540fd8ae444427639a840b26793c26a279445acc6504] <==
	W1018 14:19:13.582986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:19:15.586651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:19:15.594737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:19:17.598754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:19:17.605388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:19:19.609125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:19:19.617669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:19:21.622046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:19:21.627734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:19:23.631283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:19:23.640526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:19:25.644786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:19:25.650504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:19:27.654603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:19:27.661042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:19:29.664690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:19:29.670693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:19:31.676439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:19:31.682143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:19:33.685389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:19:33.691755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:19:35.696313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:19:35.705661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:19:37.710322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:19:37.720394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
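
Three independent signatures recur in the log dump above. The kube-scheduler reflector "forbidden" errors at 14:09 are a startup race: the scheduler's informers began listing resources before its RBAC bindings had propagated, and the closing "Caches are synced" line shows it recovered on its own. The kubelet entries are the actual failure cause for this run: every pull from docker.io fails with toomanyrequests, Docker Hub's unauthenticated pull rate limit. The storage-provisioner warnings are cosmetic; it still watches v1 Endpoints, which is deprecated in favor of discovery.k8s.io/v1 EndpointSlice as of v1.33. A minimal sketch for confirming the first and third points against the same cluster (context name taken from this run):

	# Expect "yes" once the scheduler's RBAC has propagated
	kubectl --context addons-891059 auth can-i list pods --as=system:kube-scheduler

	# The non-deprecated replacement the provisioner warning points at
	kubectl --context addons-891059 get endpointslices.discovery.k8s.io -A
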
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-891059 -n addons-891059
helpers_test.go:269: (dbg) Run:  kubectl --context addons-891059 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-nbrm2 ingress-nginx-admission-patch-lz2l5
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-891059 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-nbrm2 ingress-nginx-admission-patch-lz2l5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-891059 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-nbrm2 ingress-nginx-admission-patch-lz2l5: exit status 1 (100.269339ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-891059/192.168.39.100
	Start Time:       Sat, 18 Oct 2025 14:11:37 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lrm2j (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lrm2j:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  8m3s                 default-scheduler  Successfully assigned default/nginx to addons-891059
	  Normal   Pulling    106s (x4 over 8m3s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     49s (x4 over 6m13s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     49s (x4 over 6m13s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    4s (x8 over 6m12s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     4s (x8 over 6m12s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-891059/192.168.39.100
	Start Time:       Sat, 18 Oct 2025 14:11:27 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-48qc7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-48qc7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  8m13s                  default-scheduler  Successfully assigned default/task-pv-pod to addons-891059
	  Normal   Pulling    2m15s (x4 over 8m13s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     79s (x4 over 6m44s)    kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     79s (x4 over 6m44s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    10s (x10 over 6m44s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     10s (x10 over 6m44s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-891059/192.168.39.100
	Start Time:       Sat, 18 Oct 2025 14:11:23 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2cp2j (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-2cp2j:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  8m17s                 default-scheduler  Successfully assigned default/test-local-path to addons-891059
	  Warning  Failed     7m15s                 kubelet            Failed to pull image "busybox:stable": initializing source docker://busybox:stable: reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     110s (x4 over 7m15s)  kubelet            Error: ErrImagePull
	  Warning  Failed     110s (x3 over 5m28s)  kubelet            Failed to pull image "busybox:stable": fetching target platform image selected from image index: reading manifest sha256:4a35a7836fe08f340a42e25c4ac5eef4439585bbbb817b7bd28b2cd87c742642 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    30s (x11 over 7m14s)  kubelet            Back-off pulling image "busybox:stable"
	  Warning  Failed     30s (x11 over 7m14s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    19s (x5 over 8m16s)   kubelet            Pulling image "busybox:stable"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-nbrm2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-lz2l5" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-891059 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-nbrm2 ingress-nginx-admission-patch-lz2l5: exit status 1
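
Every non-running pod in the describe output above is blocked on the same root cause: anonymous pulls from docker.io are rate-limited. (The two NotFound errors in stderr are expected; the ingress-nginx admission-create and admission-patch pods are short-lived Job pods that had already been cleaned up.) One hedged mitigation is to give the test namespace authenticated pull credentials; the secret name regcred and the credential placeholders below are illustrative, not part of this test suite:

	# Create an authenticated pull secret (placeholders, not real credentials)
	kubectl --context addons-891059 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<token>

	# Let pods using the default service account inherit it
	kubectl --context addons-891059 patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "regcred"}]}'
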
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-891059 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-891059 addons disable ingress-dns --alsologtostderr -v=1: (1.353425694s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-891059 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-891059 addons disable ingress --alsologtostderr -v=1: (7.849522431s)
--- FAIL: TestAddons/parallel/Ingress (492.57s)
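
A CI-side alternative to per-namespace secrets is to route image pulls through a mirror when the profile is created; minikube exposes a --registry-mirror flag for this. Whether the flag is honored end-to-end by the cri-o runtime used in this job would need verification, so treat this as a sketch with a placeholder mirror host:

	# Sketch: start the profile with docker.io pulls routed through a mirror
	out/minikube-linux-amd64 start -p addons-891059 --driver=kvm2 \
	  --container-runtime=crio --registry-mirror=https://<mirror-host>
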

                                                
                                    
x
+
TestAddons/parallel/CSI (380.17s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:549: csi-hostpath-driver pods stabilized in 11.005267ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-891059 create -f testdata/csi-hostpath-driver/pvc.yaml
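
The testdata manifest referenced above is not reproduced in this log. For orientation, this is a sketch of a PVC in the shape the csi-hostpath driver expects; the storage class name csi-hostpath-sc and the 1Gi request are assumptions, not the verbatim contents of testdata/csi-hostpath-driver/pvc.yaml:

	# Illustrative only: a PVC bound to the csi-hostpath storage class
	cat <<'EOF' | kubectl --context addons-891059 apply -f -
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc
	spec:
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 1Gi
	  storageClassName: csi-hostpath-sc
	EOF
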
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891059 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-891059 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [95d229e3-8666-49b8-b2d2-2e34ed8f3aab] Pending
helpers_test.go:352: "task-pv-pod" [95d229e3-8666-49b8-b2d2-2e34ed8f3aab] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:337: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:567: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:567: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-891059 -n addons-891059
addons_test.go:567: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-10-18 14:17:27.359763405 +0000 UTC m=+541.095120463
addons_test.go:567: (dbg) Run:  kubectl --context addons-891059 describe po task-pv-pod -n default
addons_test.go:567: (dbg) kubectl --context addons-891059 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-891059/192.168.39.100
Start Time:       Sat, 18 Oct 2025 14:11:27 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.27
IPs:
  IP:  10.244.0.27
Containers:
  task-pv-container:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           80/TCP (http-server)
    Host Port:      0/TCP (http-server)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from task-pv-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-48qc7 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  task-pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hpvc
    ReadOnly:   false
  kube-api-access-48qc7:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  6m                   default-scheduler  Successfully assigned default/task-pv-pod to addons-891059
  Warning  Failed     53s (x3 over 4m31s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     53s (x3 over 4m31s)  kubelet            Error: ErrImagePull
  Normal   BackOff    14s (x5 over 4m31s)  kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     14s (x5 over 4m31s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    2s (x4 over 6m)      kubelet            Pulling image "docker.io/nginx"
addons_test.go:567: (dbg) Run:  kubectl --context addons-891059 logs task-pv-pod -n default
addons_test.go:567: (dbg) Non-zero exit: kubectl --context addons-891059 logs task-pv-pod -n default: exit status 1 (105.677361ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:567: kubectl --context addons-891059 logs task-pv-pod -n default: exit status 1
addons_test.go:568: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
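
When triaging a failure like this, the post-mortem dump below is authoritative but noisy; the pull-failure trail for a single pod can be isolated directly. A small sketch, reusing this run's context and namespace:

	# Only the events attached to task-pv-pod, oldest first
	kubectl --context addons-891059 get events -n default \
	  --field-selector involvedObject.name=task-pv-pod \
	  --sort-by=.lastTimestamp
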
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/CSI]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-891059 -n addons-891059
helpers_test.go:252: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-891059 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-891059 logs -n 25: (1.478361556s)
helpers_test.go:260: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	│ delete │ -p download-only-031579 │ download-only-031579 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ 18 Oct 25 14:08 UTC │
	│ start │ -o=json --download-only -p download-only-398489 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-398489 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ │
	│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ 18 Oct 25 14:08 UTC │
	│ delete │ -p download-only-398489 │ download-only-398489 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ 18 Oct 25 14:08 UTC │
	│ delete │ -p download-only-031579 │ download-only-031579 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ 18 Oct 25 14:08 UTC │
	│ delete │ -p download-only-398489 │ download-only-398489 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ 18 Oct 25 14:08 UTC │
	│ start │ --download-only -p binary-mirror-305392 --alsologtostderr --binary-mirror http://127.0.0.1:39643 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ binary-mirror-305392 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ │
	│ delete │ -p binary-mirror-305392 │ binary-mirror-305392 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ 18 Oct 25 14:08 UTC │
	│ addons │ enable dashboard -p addons-891059 │ addons-891059 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ │
	│ addons │ disable dashboard -p addons-891059 │ addons-891059 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ │
	│ start │ -p addons-891059 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-891059 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ 18 Oct 25 14:11 UTC │
	│ addons │ addons-891059 addons disable volcano --alsologtostderr -v=1 │ addons-891059 │ jenkins │ v1.37.0 │ 18 Oct 25 14:11 UTC │ 18 Oct 25 14:11 UTC │
	│ addons │ addons-891059 addons disable gcp-auth --alsologtostderr -v=1 │ addons-891059 │ jenkins │ v1.37.0 │ 18 Oct 25 14:11 UTC │ 18 Oct 25 14:11 UTC │
	│ addons │ addons-891059 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-891059 │ jenkins │ v1.37.0 │ 18 Oct 25 14:11 UTC │ 18 Oct 25 14:11 UTC │
	│ addons │ addons-891059 addons disable metrics-server --alsologtostderr -v=1 │ addons-891059 │ jenkins │ v1.37.0 │ 18 Oct 25 14:11 UTC │ 18 Oct 25 14:11 UTC │
	│ addons │ addons-891059 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-891059 │ jenkins │ v1.37.0 │ 18 Oct 25 14:11 UTC │ 18 Oct 25 14:11 UTC │
	│ ip │ addons-891059 ip │ addons-891059 │ jenkins │ v1.37.0 │ 18 Oct 25 14:12 UTC │ 18 Oct 25 14:12 UTC │
	│ addons │ addons-891059 addons disable registry --alsologtostderr -v=1 │ addons-891059 │ jenkins │ v1.37.0 │ 18 Oct 25 14:12 UTC │ 18 Oct 25 14:12 UTC │
	│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-891059 │ addons-891059 │ jenkins │ v1.37.0 │ 18 Oct 25 14:12 UTC │ 18 Oct 25 14:12 UTC │
	│ addons │ addons-891059 addons disable registry-creds --alsologtostderr -v=1 │ addons-891059 │ jenkins │ v1.37.0 │ 18 Oct 25 14:12 UTC │ 18 Oct 25 14:12 UTC │
	│ addons │ addons-891059 addons disable yakd --alsologtostderr -v=1 │ addons-891059 │ jenkins │ v1.37.0 │ 18 Oct 25 14:12 UTC │ 18 Oct 25 14:12 UTC │
	│ addons │ addons-891059 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-891059 │ jenkins │ v1.37.0 │ 18 Oct 25 14:12 UTC │ 18 Oct 25 14:12 UTC │
	│ addons │ enable headlamp -p addons-891059 --alsologtostderr -v=1 │ addons-891059 │ jenkins │ v1.37.0 │ 18 Oct 25 14:12 UTC │ 18 Oct 25 14:12 UTC │
	│ addons │ addons-891059 addons disable headlamp --alsologtostderr -v=1 │ addons-891059 │ jenkins │ v1.37.0 │ 18 Oct 25 14:14 UTC │ 18 Oct 25 14:14 UTC │
	│ addons │ addons-891059 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-891059 │ jenkins │ v1.37.0 │ 18 Oct 25 14:14 UTC │ 18 Oct 25 14:15 UTC │
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 14:08:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 14:08:38.383524 1760410 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:08:38.383797 1760410 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:08:38.383806 1760410 out.go:374] Setting ErrFile to fd 2...
	I1018 14:08:38.383810 1760410 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:08:38.383984 1760410 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1755824/.minikube/bin
	I1018 14:08:38.384564 1760410 out.go:368] Setting JSON to false
	I1018 14:08:38.385550 1760410 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":21066,"bootTime":1760775452,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 14:08:38.385650 1760410 start.go:141] virtualization: kvm guest
	I1018 14:08:38.387370 1760410 out.go:179] * [addons-891059] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 14:08:38.388598 1760410 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 14:08:38.388649 1760410 notify.go:220] Checking for updates...
	I1018 14:08:38.390750 1760410 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 14:08:38.391832 1760410 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1755824/kubeconfig
	I1018 14:08:38.392857 1760410 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1755824/.minikube
	I1018 14:08:38.393954 1760410 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 14:08:38.395387 1760410 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 14:08:38.397030 1760410 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 14:08:38.428089 1760410 out.go:179] * Using the kvm2 driver based on user configuration
	I1018 14:08:38.429204 1760410 start.go:305] selected driver: kvm2
	I1018 14:08:38.429233 1760410 start.go:925] validating driver "kvm2" against <nil>
	I1018 14:08:38.429248 1760410 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 14:08:38.429988 1760410 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 14:08:38.430081 1760410 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-1755824/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 14:08:38.444435 1760410 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 14:08:38.444496 1760410 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-1755824/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 14:08:38.459956 1760410 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 14:08:38.460007 1760410 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 14:08:38.460292 1760410 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 14:08:38.460324 1760410 cni.go:84] Creating CNI manager for ""
	I1018 14:08:38.460395 1760410 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 14:08:38.460407 1760410 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1018 14:08:38.460458 1760410 start.go:349] cluster config:
	{Name:addons-891059 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-891059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:08:38.460561 1760410 iso.go:125] acquiring lock: {Name:mk7faf1d3c636cdbb2becc20102b665984151b51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 14:08:38.462275 1760410 out.go:179] * Starting "addons-891059" primary control-plane node in "addons-891059" cluster
	I1018 14:08:38.463616 1760410 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 14:08:38.463663 1760410 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 14:08:38.463679 1760410 cache.go:58] Caching tarball of preloaded images
	I1018 14:08:38.463782 1760410 preload.go:233] Found /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 14:08:38.463797 1760410 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 14:08:38.464313 1760410 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/config.json ...
	I1018 14:08:38.464364 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/config.json: {Name:mk7320464dda7a1239a5641208a2baa2eb0aeb82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
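The profile save above runs under a "WriteFile acquiring" lock with a 500ms retry delay and a 1m timeout. A sketch of that acquire-write-rename pattern with the standard library only; the sidecar `.lock` file and helper name are illustrative stand-ins for minikube's real lock implementation:

```go
package config

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// writeConfigLocked serializes cfg and atomically replaces config.json,
// retrying every `delay` until `timeout` while another writer holds the lock.
func writeConfigLocked(path string, cfg any, delay, timeout time.Duration) error {
	lock := path + ".lock"
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0600)
		if err == nil {
			f.Close()
			break // lock acquired
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out acquiring %s", lock)
		}
		time.Sleep(delay)
	}
	defer os.Remove(lock)

	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	tmp := filepath.Join(filepath.Dir(path), ".config.json.tmp")
	if err := os.WriteFile(tmp, data, 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path) // atomic on POSIX filesystems
}
```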
	I1018 14:08:38.464529 1760410 start.go:360] acquireMachinesLock for addons-891059: {Name:mkd96faf82baee5d117338197f9c6cbf4f45de94 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1018 14:08:38.464580 1760410 start.go:364] duration metric: took 35.666µs to acquireMachinesLock for "addons-891059"
	I1018 14:08:38.464596 1760410 start.go:93] Provisioning new machine with config: &{Name:addons-891059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-891059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 14:08:38.464647 1760410 start.go:125] createHost starting for "" (driver="kvm2")
	I1018 14:08:38.467259 1760410 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1018 14:08:38.467474 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:08:38.467524 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:08:38.481384 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38917
	I1018 14:08:38.481876 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:08:38.482458 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:08:38.482488 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:08:38.482906 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:08:38.483171 1760410 main.go:141] libmachine: (addons-891059) Calling .GetMachineName
	I1018 14:08:38.483408 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:08:38.483601 1760410 start.go:159] libmachine.API.Create for "addons-891059" (driver="kvm2")
	I1018 14:08:38.483638 1760410 client.go:168] LocalClient.Create starting
	I1018 14:08:38.483679 1760410 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem
	I1018 14:08:38.745193 1760410 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/cert.pem
	I1018 14:08:39.239522 1760410 main.go:141] libmachine: Running pre-create checks...
	I1018 14:08:39.239552 1760410 main.go:141] libmachine: (addons-891059) Calling .PreCreateCheck
	I1018 14:08:39.240096 1760410 main.go:141] libmachine: (addons-891059) Calling .GetConfigRaw
	I1018 14:08:39.240581 1760410 main.go:141] libmachine: Creating machine...
	I1018 14:08:39.240598 1760410 main.go:141] libmachine: (addons-891059) Calling .Create
	I1018 14:08:39.240735 1760410 main.go:141] libmachine: (addons-891059) creating domain...
	I1018 14:08:39.240756 1760410 main.go:141] libmachine: (addons-891059) creating network...
	I1018 14:08:39.242180 1760410 main.go:141] libmachine: (addons-891059) DBG | found existing default network
	I1018 14:08:39.242394 1760410 main.go:141] libmachine: (addons-891059) DBG | <network>
	I1018 14:08:39.242421 1760410 main.go:141] libmachine: (addons-891059) DBG |   <name>default</name>
	I1018 14:08:39.242432 1760410 main.go:141] libmachine: (addons-891059) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1018 14:08:39.242439 1760410 main.go:141] libmachine: (addons-891059) DBG |   <forward mode='nat'>
	I1018 14:08:39.242474 1760410 main.go:141] libmachine: (addons-891059) DBG |     <nat>
	I1018 14:08:39.242495 1760410 main.go:141] libmachine: (addons-891059) DBG |       <port start='1024' end='65535'/>
	I1018 14:08:39.242573 1760410 main.go:141] libmachine: (addons-891059) DBG |     </nat>
	I1018 14:08:39.242596 1760410 main.go:141] libmachine: (addons-891059) DBG |   </forward>
	I1018 14:08:39.242607 1760410 main.go:141] libmachine: (addons-891059) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1018 14:08:39.242619 1760410 main.go:141] libmachine: (addons-891059) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1018 14:08:39.242634 1760410 main.go:141] libmachine: (addons-891059) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1018 14:08:39.242645 1760410 main.go:141] libmachine: (addons-891059) DBG |     <dhcp>
	I1018 14:08:39.242658 1760410 main.go:141] libmachine: (addons-891059) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1018 14:08:39.242666 1760410 main.go:141] libmachine: (addons-891059) DBG |     </dhcp>
	I1018 14:08:39.242673 1760410 main.go:141] libmachine: (addons-891059) DBG |   </ip>
	I1018 14:08:39.242680 1760410 main.go:141] libmachine: (addons-891059) DBG | </network>
	I1018 14:08:39.242694 1760410 main.go:141] libmachine: (addons-891059) DBG | 
	I1018 14:08:39.243130 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:39.242976 1760437 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123570}
	I1018 14:08:39.243178 1760410 main.go:141] libmachine: (addons-891059) DBG | defining private network:
	I1018 14:08:39.243193 1760410 main.go:141] libmachine: (addons-891059) DBG | 
	I1018 14:08:39.243204 1760410 main.go:141] libmachine: (addons-891059) DBG | <network>
	I1018 14:08:39.243216 1760410 main.go:141] libmachine: (addons-891059) DBG |   <name>mk-addons-891059</name>
	I1018 14:08:39.243222 1760410 main.go:141] libmachine: (addons-891059) DBG |   <dns enable='no'/>
	I1018 14:08:39.243227 1760410 main.go:141] libmachine: (addons-891059) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1018 14:08:39.243234 1760410 main.go:141] libmachine: (addons-891059) DBG |     <dhcp>
	I1018 14:08:39.243239 1760410 main.go:141] libmachine: (addons-891059) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1018 14:08:39.243245 1760410 main.go:141] libmachine: (addons-891059) DBG |     </dhcp>
	I1018 14:08:39.243249 1760410 main.go:141] libmachine: (addons-891059) DBG |   </ip>
	I1018 14:08:39.243263 1760410 main.go:141] libmachine: (addons-891059) DBG | </network>
	I1018 14:08:39.243270 1760410 main.go:141] libmachine: (addons-891059) DBG | 
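The XML printed above is handed to libvirt verbatim to define mk-addons-891059 on the free 192.168.39.0/24 subnet. A sketch of the define-then-start sequence, assuming the libvirt Go bindings (libvirt.org/go/libvirt) and their NetworkDefineXML/Create calls; the XML is taken from the log:

```go
package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

const networkXML = `<network>
  <name>mk-addons-891059</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	// URI from the "Setting default libvirt URI" line above.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Define the persistent network from XML, then start it.
	net, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		log.Fatal(err)
	}
	defer net.Free()
	if err := net.Create(); err != nil {
		log.Fatal(err)
	}
	log.Println("private network mk-addons-891059 created")
}
```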
	I1018 14:08:39.248946 1760410 main.go:141] libmachine: (addons-891059) DBG | creating private network mk-addons-891059 192.168.39.0/24...
	I1018 14:08:39.319941 1760410 main.go:141] libmachine: (addons-891059) DBG | private network mk-addons-891059 192.168.39.0/24 created
	I1018 14:08:39.320210 1760410 main.go:141] libmachine: (addons-891059) DBG | <network>
	I1018 14:08:39.320231 1760410 main.go:141] libmachine: (addons-891059) DBG |   <name>mk-addons-891059</name>
	I1018 14:08:39.320247 1760410 main.go:141] libmachine: (addons-891059) setting up store path in /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059 ...
	I1018 14:08:39.320262 1760410 main.go:141] libmachine: (addons-891059) DBG |   <uuid>3e7dc5ca-8c6a-4f5a-8f08-752a5d85d27d</uuid>
	I1018 14:08:39.320883 1760410 main.go:141] libmachine: (addons-891059) building disk image from file:///home/jenkins/minikube-integration/21409-1755824/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1018 14:08:39.320919 1760410 main.go:141] libmachine: (addons-891059) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1018 14:08:39.320937 1760410 main.go:141] libmachine: (addons-891059) Downloading /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21409-1755824/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1018 14:08:39.320964 1760410 main.go:141] libmachine: (addons-891059) DBG |   <mac address='52:54:00:80:09:dc'/>
	I1018 14:08:39.320974 1760410 main.go:141] libmachine: (addons-891059) DBG |   <dns enable='no'/>
	I1018 14:08:39.320985 1760410 main.go:141] libmachine: (addons-891059) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1018 14:08:39.320997 1760410 main.go:141] libmachine: (addons-891059) DBG |     <dhcp>
	I1018 14:08:39.321006 1760410 main.go:141] libmachine: (addons-891059) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1018 14:08:39.321013 1760410 main.go:141] libmachine: (addons-891059) DBG |     </dhcp>
	I1018 14:08:39.321038 1760410 main.go:141] libmachine: (addons-891059) DBG |   </ip>
	I1018 14:08:39.321045 1760410 main.go:141] libmachine: (addons-891059) DBG | </network>
	I1018 14:08:39.321061 1760410 main.go:141] libmachine: (addons-891059) DBG | 
	I1018 14:08:39.321072 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:39.320218 1760437 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21409-1755824/.minikube
	I1018 14:08:39.610846 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:39.610682 1760437 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa...
	I1018 14:08:39.691572 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:39.691412 1760437 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/addons-891059.rawdisk...
	I1018 14:08:39.691603 1760410 main.go:141] libmachine: (addons-891059) DBG | Writing magic tar header
	I1018 14:08:39.691616 1760410 main.go:141] libmachine: (addons-891059) DBG | Writing SSH key tar header
	I1018 14:08:39.691625 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:39.691531 1760437 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059 ...
	I1018 14:08:39.691639 1760410 main.go:141] libmachine: (addons-891059) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059
	I1018 14:08:39.691766 1760410 main.go:141] libmachine: (addons-891059) setting executable bit set on /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059 (perms=drwx------)
	I1018 14:08:39.691804 1760410 main.go:141] libmachine: (addons-891059) setting executable bit set on /home/jenkins/minikube-integration/21409-1755824/.minikube/machines (perms=drwxr-xr-x)
	I1018 14:08:39.691812 1760410 main.go:141] libmachine: (addons-891059) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines
	I1018 14:08:39.691822 1760410 main.go:141] libmachine: (addons-891059) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-1755824/.minikube
	I1018 14:08:39.691828 1760410 main.go:141] libmachine: (addons-891059) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-1755824
	I1018 14:08:39.691835 1760410 main.go:141] libmachine: (addons-891059) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1018 14:08:39.691839 1760410 main.go:141] libmachine: (addons-891059) DBG | checking permissions on dir: /home/jenkins
	I1018 14:08:39.691848 1760410 main.go:141] libmachine: (addons-891059) DBG | checking permissions on dir: /home
	I1018 14:08:39.691853 1760410 main.go:141] libmachine: (addons-891059) DBG | skipping /home - not owner
	I1018 14:08:39.691897 1760410 main.go:141] libmachine: (addons-891059) setting executable bit set on /home/jenkins/minikube-integration/21409-1755824/.minikube (perms=drwxr-xr-x)
	I1018 14:08:39.691923 1760410 main.go:141] libmachine: (addons-891059) setting executable bit set on /home/jenkins/minikube-integration/21409-1755824 (perms=drwxrwxr-x)
	I1018 14:08:39.691940 1760410 main.go:141] libmachine: (addons-891059) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1018 14:08:39.691998 1760410 main.go:141] libmachine: (addons-891059) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
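The permission fixes above walk from the machine directory up toward /home, adding traversal (executable) bits on each directory the current user owns and skipping the rest, as the "/home - not owner" line shows. A simplified Linux-only sketch of that walk; it ORs in the execute bits rather than reproducing the exact per-directory modes from the log:

```go
package store

import (
	"log"
	"os"
	"path/filepath"
	"syscall"
)

// fixPermsUp ensures each directory from dir up to stop that we own has its
// executable bits set, so libvirt/qemu can traverse into the machine store.
func fixPermsUp(dir, stop string) error {
	uid := os.Getuid()
	for {
		info, err := os.Stat(dir)
		if err != nil {
			return err
		}
		st, ok := info.Sys().(*syscall.Stat_t)
		if !ok || int(st.Uid) != uid {
			log.Printf("skipping %s - not owner", dir)
		} else if err := os.Chmod(dir, info.Mode()|0o111); err != nil {
			return err
		}
		if dir == stop || dir == "/" {
			return nil
		}
		dir = filepath.Dir(dir)
	}
}
```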
	I1018 14:08:39.692026 1760410 main.go:141] libmachine: (addons-891059) defining domain...
	I1018 14:08:39.693006 1760410 main.go:141] libmachine: (addons-891059) defining domain using XML: 
	I1018 14:08:39.693019 1760410 main.go:141] libmachine: (addons-891059) <domain type='kvm'>
	I1018 14:08:39.693025 1760410 main.go:141] libmachine: (addons-891059)   <name>addons-891059</name>
	I1018 14:08:39.693030 1760410 main.go:141] libmachine: (addons-891059)   <memory unit='MiB'>4096</memory>
	I1018 14:08:39.693036 1760410 main.go:141] libmachine: (addons-891059)   <vcpu>2</vcpu>
	I1018 14:08:39.693040 1760410 main.go:141] libmachine: (addons-891059)   <features>
	I1018 14:08:39.693046 1760410 main.go:141] libmachine: (addons-891059)     <acpi/>
	I1018 14:08:39.693053 1760410 main.go:141] libmachine: (addons-891059)     <apic/>
	I1018 14:08:39.693058 1760410 main.go:141] libmachine: (addons-891059)     <pae/>
	I1018 14:08:39.693064 1760410 main.go:141] libmachine: (addons-891059)   </features>
	I1018 14:08:39.693069 1760410 main.go:141] libmachine: (addons-891059)   <cpu mode='host-passthrough'>
	I1018 14:08:39.693074 1760410 main.go:141] libmachine: (addons-891059)   </cpu>
	I1018 14:08:39.693078 1760410 main.go:141] libmachine: (addons-891059)   <os>
	I1018 14:08:39.693085 1760410 main.go:141] libmachine: (addons-891059)     <type>hvm</type>
	I1018 14:08:39.693090 1760410 main.go:141] libmachine: (addons-891059)     <boot dev='cdrom'/>
	I1018 14:08:39.693095 1760410 main.go:141] libmachine: (addons-891059)     <boot dev='hd'/>
	I1018 14:08:39.693100 1760410 main.go:141] libmachine: (addons-891059)     <bootmenu enable='no'/>
	I1018 14:08:39.693104 1760410 main.go:141] libmachine: (addons-891059)   </os>
	I1018 14:08:39.693134 1760410 main.go:141] libmachine: (addons-891059)   <devices>
	I1018 14:08:39.693159 1760410 main.go:141] libmachine: (addons-891059)     <disk type='file' device='cdrom'>
	I1018 14:08:39.693176 1760410 main.go:141] libmachine: (addons-891059)       <source file='/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/boot2docker.iso'/>
	I1018 14:08:39.693184 1760410 main.go:141] libmachine: (addons-891059)       <target dev='hdc' bus='scsi'/>
	I1018 14:08:39.693194 1760410 main.go:141] libmachine: (addons-891059)       <readonly/>
	I1018 14:08:39.693202 1760410 main.go:141] libmachine: (addons-891059)     </disk>
	I1018 14:08:39.693215 1760410 main.go:141] libmachine: (addons-891059)     <disk type='file' device='disk'>
	I1018 14:08:39.693225 1760410 main.go:141] libmachine: (addons-891059)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1018 14:08:39.693242 1760410 main.go:141] libmachine: (addons-891059)       <source file='/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/addons-891059.rawdisk'/>
	I1018 14:08:39.693252 1760410 main.go:141] libmachine: (addons-891059)       <target dev='hda' bus='virtio'/>
	I1018 14:08:39.693259 1760410 main.go:141] libmachine: (addons-891059)     </disk>
	I1018 14:08:39.693271 1760410 main.go:141] libmachine: (addons-891059)     <interface type='network'>
	I1018 14:08:39.693281 1760410 main.go:141] libmachine: (addons-891059)       <source network='mk-addons-891059'/>
	I1018 14:08:39.693293 1760410 main.go:141] libmachine: (addons-891059)       <model type='virtio'/>
	I1018 14:08:39.693303 1760410 main.go:141] libmachine: (addons-891059)     </interface>
	I1018 14:08:39.693324 1760410 main.go:141] libmachine: (addons-891059)     <interface type='network'>
	I1018 14:08:39.693354 1760410 main.go:141] libmachine: (addons-891059)       <source network='default'/>
	I1018 14:08:39.693363 1760410 main.go:141] libmachine: (addons-891059)       <model type='virtio'/>
	I1018 14:08:39.693367 1760410 main.go:141] libmachine: (addons-891059)     </interface>
	I1018 14:08:39.693373 1760410 main.go:141] libmachine: (addons-891059)     <serial type='pty'>
	I1018 14:08:39.693396 1760410 main.go:141] libmachine: (addons-891059)       <target port='0'/>
	I1018 14:08:39.693404 1760410 main.go:141] libmachine: (addons-891059)     </serial>
	I1018 14:08:39.693408 1760410 main.go:141] libmachine: (addons-891059)     <console type='pty'>
	I1018 14:08:39.693416 1760410 main.go:141] libmachine: (addons-891059)       <target type='serial' port='0'/>
	I1018 14:08:39.693426 1760410 main.go:141] libmachine: (addons-891059)     </console>
	I1018 14:08:39.693446 1760410 main.go:141] libmachine: (addons-891059)     <rng model='virtio'>
	I1018 14:08:39.693467 1760410 main.go:141] libmachine: (addons-891059)       <backend model='random'>/dev/random</backend>
	I1018 14:08:39.693482 1760410 main.go:141] libmachine: (addons-891059)     </rng>
	I1018 14:08:39.693492 1760410 main.go:141] libmachine: (addons-891059)   </devices>
	I1018 14:08:39.693501 1760410 main.go:141] libmachine: (addons-891059) </domain>
	I1018 14:08:39.693506 1760410 main.go:141] libmachine: (addons-891059) 
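With the domain XML above assembled, defining the domain and booting it (the "starting domain..." step below) is a two-call sequence against the same bindings. A sketch, where domainXML stands for the full document printed above:

```go
package machine

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt"
)

// startDomain sketches the define-and-start step: define the persistent
// domain from its XML, then Create() boots it ("domain is now running").
func startDomain(conn *libvirt.Connect, domainXML string) error {
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return fmt.Errorf("defining domain: %w", err)
	}
	defer dom.Free()
	return dom.Create()
}
```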
	I1018 14:08:39.706650 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:f4:cf:b8 in network default
	I1018 14:08:39.707254 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:39.707274 1760410 main.go:141] libmachine: (addons-891059) starting domain...
	I1018 14:08:39.707286 1760410 main.go:141] libmachine: (addons-891059) ensuring networks are active...
	I1018 14:08:39.707989 1760410 main.go:141] libmachine: (addons-891059) Ensuring network default is active
	I1018 14:08:39.708292 1760410 main.go:141] libmachine: (addons-891059) Ensuring network mk-addons-891059 is active
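The two "Ensuring network ... is active" lines above guard against a defined-but-stopped network. A sketch of that check, assuming the bindings' LookupNetworkByName and IsActive calls:

```go
package kvm

import (
	libvirt "libvirt.org/go/libvirt"
)

// ensureNetworkActive looks a network up by name and starts it only if it
// is not already running, matching the idempotent check in the log above.
func ensureNetworkActive(conn *libvirt.Connect, name string) error {
	net, err := conn.LookupNetworkByName(name)
	if err != nil {
		return err
	}
	defer net.Free()
	active, err := net.IsActive()
	if err != nil {
		return err
	}
	if !active {
		return net.Create()
	}
	return nil
}
```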
	I1018 14:08:39.708895 1760410 main.go:141] libmachine: (addons-891059) getting domain XML...
	I1018 14:08:39.709831 1760410 main.go:141] libmachine: (addons-891059) DBG | starting domain XML:
	I1018 14:08:39.709853 1760410 main.go:141] libmachine: (addons-891059) DBG | <domain type='kvm'>
	I1018 14:08:39.709867 1760410 main.go:141] libmachine: (addons-891059) DBG |   <name>addons-891059</name>
	I1018 14:08:39.709876 1760410 main.go:141] libmachine: (addons-891059) DBG |   <uuid>372d9231-4fa4-4480-95fc-5052e6676096</uuid>
	I1018 14:08:39.709886 1760410 main.go:141] libmachine: (addons-891059) DBG |   <memory unit='KiB'>4194304</memory>
	I1018 14:08:39.709894 1760410 main.go:141] libmachine: (addons-891059) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1018 14:08:39.709903 1760410 main.go:141] libmachine: (addons-891059) DBG |   <vcpu placement='static'>2</vcpu>
	I1018 14:08:39.709907 1760410 main.go:141] libmachine: (addons-891059) DBG |   <os>
	I1018 14:08:39.709920 1760410 main.go:141] libmachine: (addons-891059) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1018 14:08:39.709930 1760410 main.go:141] libmachine: (addons-891059) DBG |     <boot dev='cdrom'/>
	I1018 14:08:39.709943 1760410 main.go:141] libmachine: (addons-891059) DBG |     <boot dev='hd'/>
	I1018 14:08:39.709954 1760410 main.go:141] libmachine: (addons-891059) DBG |     <bootmenu enable='no'/>
	I1018 14:08:39.709988 1760410 main.go:141] libmachine: (addons-891059) DBG |   </os>
	I1018 14:08:39.710010 1760410 main.go:141] libmachine: (addons-891059) DBG |   <features>
	I1018 14:08:39.710020 1760410 main.go:141] libmachine: (addons-891059) DBG |     <acpi/>
	I1018 14:08:39.710028 1760410 main.go:141] libmachine: (addons-891059) DBG |     <apic/>
	I1018 14:08:39.710042 1760410 main.go:141] libmachine: (addons-891059) DBG |     <pae/>
	I1018 14:08:39.710052 1760410 main.go:141] libmachine: (addons-891059) DBG |   </features>
	I1018 14:08:39.710065 1760410 main.go:141] libmachine: (addons-891059) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1018 14:08:39.710080 1760410 main.go:141] libmachine: (addons-891059) DBG |   <clock offset='utc'/>
	I1018 14:08:39.710094 1760410 main.go:141] libmachine: (addons-891059) DBG |   <on_poweroff>destroy</on_poweroff>
	I1018 14:08:39.710106 1760410 main.go:141] libmachine: (addons-891059) DBG |   <on_reboot>restart</on_reboot>
	I1018 14:08:39.710116 1760410 main.go:141] libmachine: (addons-891059) DBG |   <on_crash>destroy</on_crash>
	I1018 14:08:39.710124 1760410 main.go:141] libmachine: (addons-891059) DBG |   <devices>
	I1018 14:08:39.710141 1760410 main.go:141] libmachine: (addons-891059) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1018 14:08:39.710157 1760410 main.go:141] libmachine: (addons-891059) DBG |     <disk type='file' device='cdrom'>
	I1018 14:08:39.710174 1760410 main.go:141] libmachine: (addons-891059) DBG |       <driver name='qemu' type='raw'/>
	I1018 14:08:39.710189 1760410 main.go:141] libmachine: (addons-891059) DBG |       <source file='/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/boot2docker.iso'/>
	I1018 14:08:39.710202 1760410 main.go:141] libmachine: (addons-891059) DBG |       <target dev='hdc' bus='scsi'/>
	I1018 14:08:39.710213 1760410 main.go:141] libmachine: (addons-891059) DBG |       <readonly/>
	I1018 14:08:39.710241 1760410 main.go:141] libmachine: (addons-891059) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1018 14:08:39.710261 1760410 main.go:141] libmachine: (addons-891059) DBG |     </disk>
	I1018 14:08:39.710268 1760410 main.go:141] libmachine: (addons-891059) DBG |     <disk type='file' device='disk'>
	I1018 14:08:39.710278 1760410 main.go:141] libmachine: (addons-891059) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1018 14:08:39.710289 1760410 main.go:141] libmachine: (addons-891059) DBG |       <source file='/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/addons-891059.rawdisk'/>
	I1018 14:08:39.710297 1760410 main.go:141] libmachine: (addons-891059) DBG |       <target dev='hda' bus='virtio'/>
	I1018 14:08:39.710304 1760410 main.go:141] libmachine: (addons-891059) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1018 14:08:39.710311 1760410 main.go:141] libmachine: (addons-891059) DBG |     </disk>
	I1018 14:08:39.710317 1760410 main.go:141] libmachine: (addons-891059) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1018 14:08:39.710325 1760410 main.go:141] libmachine: (addons-891059) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1018 14:08:39.710331 1760410 main.go:141] libmachine: (addons-891059) DBG |     </controller>
	I1018 14:08:39.710338 1760410 main.go:141] libmachine: (addons-891059) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1018 14:08:39.710353 1760410 main.go:141] libmachine: (addons-891059) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1018 14:08:39.710359 1760410 main.go:141] libmachine: (addons-891059) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1018 14:08:39.710375 1760410 main.go:141] libmachine: (addons-891059) DBG |     </controller>
	I1018 14:08:39.710394 1760410 main.go:141] libmachine: (addons-891059) DBG |     <interface type='network'>
	I1018 14:08:39.710417 1760410 main.go:141] libmachine: (addons-891059) DBG |       <mac address='52:54:00:12:2f:9d'/>
	I1018 14:08:39.710440 1760410 main.go:141] libmachine: (addons-891059) DBG |       <source network='mk-addons-891059'/>
	I1018 14:08:39.710448 1760410 main.go:141] libmachine: (addons-891059) DBG |       <model type='virtio'/>
	I1018 14:08:39.710453 1760410 main.go:141] libmachine: (addons-891059) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1018 14:08:39.710459 1760410 main.go:141] libmachine: (addons-891059) DBG |     </interface>
	I1018 14:08:39.710463 1760410 main.go:141] libmachine: (addons-891059) DBG |     <interface type='network'>
	I1018 14:08:39.710469 1760410 main.go:141] libmachine: (addons-891059) DBG |       <mac address='52:54:00:f4:cf:b8'/>
	I1018 14:08:39.710473 1760410 main.go:141] libmachine: (addons-891059) DBG |       <source network='default'/>
	I1018 14:08:39.710478 1760410 main.go:141] libmachine: (addons-891059) DBG |       <model type='virtio'/>
	I1018 14:08:39.710499 1760410 main.go:141] libmachine: (addons-891059) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1018 14:08:39.710511 1760410 main.go:141] libmachine: (addons-891059) DBG |     </interface>
	I1018 14:08:39.710529 1760410 main.go:141] libmachine: (addons-891059) DBG |     <serial type='pty'>
	I1018 14:08:39.710546 1760410 main.go:141] libmachine: (addons-891059) DBG |       <target type='isa-serial' port='0'>
	I1018 14:08:39.710558 1760410 main.go:141] libmachine: (addons-891059) DBG |         <model name='isa-serial'/>
	I1018 14:08:39.710568 1760410 main.go:141] libmachine: (addons-891059) DBG |       </target>
	I1018 14:08:39.710575 1760410 main.go:141] libmachine: (addons-891059) DBG |     </serial>
	I1018 14:08:39.710584 1760410 main.go:141] libmachine: (addons-891059) DBG |     <console type='pty'>
	I1018 14:08:39.710590 1760410 main.go:141] libmachine: (addons-891059) DBG |       <target type='serial' port='0'/>
	I1018 14:08:39.710597 1760410 main.go:141] libmachine: (addons-891059) DBG |     </console>
	I1018 14:08:39.710602 1760410 main.go:141] libmachine: (addons-891059) DBG |     <input type='mouse' bus='ps2'/>
	I1018 14:08:39.710611 1760410 main.go:141] libmachine: (addons-891059) DBG |     <input type='keyboard' bus='ps2'/>
	I1018 14:08:39.710619 1760410 main.go:141] libmachine: (addons-891059) DBG |     <audio id='1' type='none'/>
	I1018 14:08:39.710635 1760410 main.go:141] libmachine: (addons-891059) DBG |     <memballoon model='virtio'>
	I1018 14:08:39.710650 1760410 main.go:141] libmachine: (addons-891059) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1018 14:08:39.710670 1760410 main.go:141] libmachine: (addons-891059) DBG |     </memballoon>
	I1018 14:08:39.710681 1760410 main.go:141] libmachine: (addons-891059) DBG |     <rng model='virtio'>
	I1018 14:08:39.710688 1760410 main.go:141] libmachine: (addons-891059) DBG |       <backend model='random'>/dev/random</backend>
	I1018 14:08:39.710700 1760410 main.go:141] libmachine: (addons-891059) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1018 14:08:39.710714 1760410 main.go:141] libmachine: (addons-891059) DBG |     </rng>
	I1018 14:08:39.710725 1760410 main.go:141] libmachine: (addons-891059) DBG |   </devices>
	I1018 14:08:39.710731 1760410 main.go:141] libmachine: (addons-891059) DBG | </domain>
	I1018 14:08:39.710744 1760410 main.go:141] libmachine: (addons-891059) DBG | 
	I1018 14:08:41.127813 1760410 main.go:141] libmachine: (addons-891059) waiting for domain to start...
	I1018 14:08:41.129181 1760410 main.go:141] libmachine: (addons-891059) domain is now running
	I1018 14:08:41.129199 1760410 main.go:141] libmachine: (addons-891059) waiting for IP...
	I1018 14:08:41.130215 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:41.130734 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:41.130765 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:41.131111 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:41.131182 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:41.131117 1760437 retry.go:31] will retry after 310.436274ms: waiting for domain to come up
	I1018 14:08:41.443955 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:41.444643 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:41.444667 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:41.444959 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:41.445013 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:41.444938 1760437 retry.go:31] will retry after 310.095624ms: waiting for domain to come up
	I1018 14:08:41.756412 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:41.756912 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:41.756985 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:41.757237 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:41.757264 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:41.757211 1760437 retry.go:31] will retry after 403.034899ms: waiting for domain to come up
	I1018 14:08:42.161632 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:42.162259 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:42.162290 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:42.162631 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:42.162653 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:42.162588 1760437 retry.go:31] will retry after 392.033324ms: waiting for domain to come up
	I1018 14:08:42.555954 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:42.556467 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:42.556490 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:42.556794 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:42.556833 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:42.556772 1760437 retry.go:31] will retry after 563.122226ms: waiting for domain to come up
	I1018 14:08:43.121698 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:43.122213 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:43.122240 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:43.122649 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:43.122673 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:43.122588 1760437 retry.go:31] will retry after 654.00858ms: waiting for domain to come up
	I1018 14:08:43.778430 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:43.778988 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:43.779017 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:43.779284 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:43.779359 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:43.779296 1760437 retry.go:31] will retry after 861.369309ms: waiting for domain to come up
	I1018 14:08:44.642386 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:44.642972 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:44.643001 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:44.643258 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:44.643325 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:44.643266 1760437 retry.go:31] will retry after 1.120629341s: waiting for domain to come up
	I1018 14:08:45.765704 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:45.766202 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:45.766225 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:45.766596 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:45.766622 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:45.766568 1760437 retry.go:31] will retry after 1.280814413s: waiting for domain to come up
	I1018 14:08:47.049323 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:47.049871 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:47.049898 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:47.050228 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:47.050287 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:47.050222 1760437 retry.go:31] will retry after 2.205238568s: waiting for domain to come up
	I1018 14:08:49.257773 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:49.258389 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:49.258419 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:49.258809 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:49.258836 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:49.258779 1760437 retry.go:31] will retry after 2.31868491s: waiting for domain to come up
	I1018 14:08:51.580165 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:51.580745 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:51.580775 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:51.581147 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:51.581179 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:51.581113 1760437 retry.go:31] will retry after 2.275257905s: waiting for domain to come up
	I1018 14:08:53.858516 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:53.859085 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:53.859110 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:53.859415 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:53.859447 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:53.859390 1760437 retry.go:31] will retry after 3.968512343s: waiting for domain to come up
	I1018 14:08:57.829253 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:57.829924 1760410 main.go:141] libmachine: (addons-891059) found domain IP: 192.168.39.100
	I1018 14:08:57.829948 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has current primary IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
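The retry lines above poll the DHCP lease table first ("source=lease"), fall back to the ARP cache ("source=arp"), and back off between attempts until the guest's MAC reports an address. A sketch of that loop, assuming the bindings expose ListAllInterfaceAddresses with lease and ARP sources; the backoff growth is approximate, not the exact jittered schedule in the log:

```go
package kvm

import (
	"fmt"
	"time"

	libvirt "libvirt.org/go/libvirt"
)

// waitForIP polls the guest's interface addresses until one matching the
// given MAC appears, trying the lease table before the ARP cache.
func waitForIP(dom *libvirt.Domain, mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		for _, src := range []libvirt.DomainInterfaceAddressesSource{
			libvirt.DOMAIN_INTERFACE_ADDRESSES_SRC_LEASE, // "source=lease"
			libvirt.DOMAIN_INTERFACE_ADDRESSES_SRC_ARP,   // "source=arp"
		} {
			ifaces, err := dom.ListAllInterfaceAddresses(src)
			if err != nil {
				continue // domain may not be reporting yet
			}
			for _, iface := range ifaces {
				if iface.Hwaddr == mac && len(iface.Addrs) > 0 {
					return iface.Addrs[0].Addr, nil
				}
			}
		}
		time.Sleep(backoff)
		backoff = backoff * 3 / 2 // roughly the growth seen above
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", mac)
}
```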
	I1018 14:08:57.829954 1760410 main.go:141] libmachine: (addons-891059) reserving static IP address...
	I1018 14:08:57.830357 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find host DHCP lease matching {name: "addons-891059", mac: "52:54:00:12:2f:9d", ip: "192.168.39.100"} in network mk-addons-891059
	I1018 14:08:58.036271 1760410 main.go:141] libmachine: (addons-891059) DBG | Getting to WaitForSSH function...
	I1018 14:08:58.036306 1760410 main.go:141] libmachine: (addons-891059) reserved static IP address 192.168.39.100 for domain addons-891059
	I1018 14:08:58.036334 1760410 main.go:141] libmachine: (addons-891059) waiting for SSH...
	I1018 14:08:58.039556 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.040071 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:minikube Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:58.040113 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.040427 1760410 main.go:141] libmachine: (addons-891059) DBG | Using SSH client type: external
	I1018 14:08:58.040457 1760410 main.go:141] libmachine: (addons-891059) DBG | Using SSH private key: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa (-rw-------)
	I1018 14:08:58.040489 1760410 main.go:141] libmachine: (addons-891059) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1018 14:08:58.040505 1760410 main.go:141] libmachine: (addons-891059) DBG | About to run SSH command:
	I1018 14:08:58.040518 1760410 main.go:141] libmachine: (addons-891059) DBG | exit 0
	I1018 14:08:58.178221 1760410 main.go:141] libmachine: (addons-891059) DBG | SSH cmd err, output: <nil>: 
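The WaitForSSH step above shells out to the external ssh client with host key checking disabled (the VM's host key is freshly generated) and runs `exit 0` until it succeeds. A stdlib sketch of that probe; the option list is trimmed from the full set in the log:

```go
package provision

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH retries `ssh docker@ip exit 0` until it succeeds or times out,
// mirroring the external-client invocation logged above.
func waitForSSH(ip, keyPath string, timeout time.Duration) error {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("/usr/bin/ssh", args...).Run(); err == nil {
			return nil // SSH cmd err, output: <nil>
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("ssh to %s not reachable within %s", ip, timeout)
}
```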
	I1018 14:08:58.178611 1760410 main.go:141] libmachine: (addons-891059) domain creation complete
	I1018 14:08:58.178979 1760410 main.go:141] libmachine: (addons-891059) Calling .GetConfigRaw
	I1018 14:08:58.179725 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:08:58.179914 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:08:58.180097 1760410 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1018 14:08:58.180117 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:08:58.181922 1760410 main.go:141] libmachine: Detecting operating system of created instance...
	I1018 14:08:58.181937 1760410 main.go:141] libmachine: Waiting for SSH to be available...
	I1018 14:08:58.181946 1760410 main.go:141] libmachine: Getting to WaitForSSH function...
	I1018 14:08:58.181953 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:58.184676 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.185179 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:58.185207 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.185454 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:58.185640 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:58.185815 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:58.185930 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:58.186116 1760410 main.go:141] libmachine: Using SSH client type: native
	I1018 14:08:58.186465 1760410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1018 14:08:58.186483 1760410 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1018 14:08:58.305360 1760410 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 14:08:58.305387 1760410 main.go:141] libmachine: Detecting the provisioner...
	I1018 14:08:58.305399 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:58.308732 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.309086 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:58.309110 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.309407 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:58.309679 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:58.309898 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:58.310049 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:58.310245 1760410 main.go:141] libmachine: Using SSH client type: native
	I1018 14:08:58.310526 1760410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1018 14:08:58.310542 1760410 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1018 14:08:58.429225 1760410 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1018 14:08:58.429329 1760410 main.go:141] libmachine: found compatible host: buildroot
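Provisioner detection above is just parsing the `cat /etc/os-release` output into key/value pairs and matching on ID. A compact sketch; the function name is illustrative:

```go
package provision

import "strings"

// detectProvisioner parses /etc/os-release content (KEY=VALUE lines, values
// optionally quoted) and returns the distro ID, e.g. "buildroot" here.
func detectProvisioner(osRelease string) string {
	vals := map[string]string{}
	for _, line := range strings.Split(osRelease, "\n") {
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		vals[k] = strings.Trim(v, `"`)
	}
	return vals["ID"] // "found compatible host: buildroot"
}
```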
	I1018 14:08:58.429364 1760410 main.go:141] libmachine: Provisioning with buildroot...
	I1018 14:08:58.429383 1760410 main.go:141] libmachine: (addons-891059) Calling .GetMachineName
	I1018 14:08:58.429696 1760410 buildroot.go:166] provisioning hostname "addons-891059"
	I1018 14:08:58.429732 1760410 main.go:141] libmachine: (addons-891059) Calling .GetMachineName
	I1018 14:08:58.429974 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:58.433221 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.433619 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:58.433638 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.433891 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:58.434117 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:58.434290 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:58.434435 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:58.434615 1760410 main.go:141] libmachine: Using SSH client type: native
	I1018 14:08:58.434828 1760410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1018 14:08:58.434841 1760410 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-891059 && echo "addons-891059" | sudo tee /etc/hostname
	I1018 14:08:58.571164 1760410 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-891059
	
	I1018 14:08:58.571201 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:58.574587 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.575023 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:58.575060 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.575255 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:58.575484 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:58.575706 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:58.575818 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:58.576059 1760410 main.go:141] libmachine: Using SSH client type: native
	I1018 14:08:58.576292 1760410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1018 14:08:58.576310 1760410 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-891059' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-891059/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-891059' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 14:08:58.705558 1760410 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 14:08:58.705593 1760410 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21409-1755824/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-1755824/.minikube}
	I1018 14:08:58.705650 1760410 buildroot.go:174] setting up certificates
	I1018 14:08:58.705677 1760410 provision.go:84] configureAuth start
	I1018 14:08:58.705691 1760410 main.go:141] libmachine: (addons-891059) Calling .GetMachineName
	I1018 14:08:58.706037 1760410 main.go:141] libmachine: (addons-891059) Calling .GetIP
	I1018 14:08:58.709084 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.709428 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:58.709454 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.709701 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:58.712025 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.712527 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:58.712572 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.712679 1760410 provision.go:143] copyHostCerts
	I1018 14:08:58.712765 1760410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.pem (1082 bytes)
	I1018 14:08:58.712925 1760410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-1755824/.minikube/cert.pem (1123 bytes)
	I1018 14:08:58.713027 1760410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-1755824/.minikube/key.pem (1675 bytes)
	I1018 14:08:58.713099 1760410 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca-key.pem org=jenkins.addons-891059 san=[127.0.0.1 192.168.39.100 addons-891059 localhost minikube]
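The server-cert generation above signs a certificate with the minikube CA and puts the log's san=[...] entries (loopback, the VM IP, and the hostnames) into the SAN extension; the 26280h expiry from the config is three years. A stdlib sketch of that step under those assumptions:

```go
package provision

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// makeServerCert issues a CA-signed server certificate with the IP and DNS
// SANs from the log above, writing the PEM to server.pem.
func makeServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-891059"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // 3 years
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.100")},
		DNSNames:     []string{"addons-891059", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return err
	}
	return os.WriteFile("server.pem",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
}
```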
	I1018 14:08:59.195381 1760410 provision.go:177] copyRemoteCerts
	I1018 14:08:59.195454 1760410 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 14:08:59.195481 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:59.198489 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.198846 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.198881 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.199059 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:59.199299 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:59.199483 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:59.199691 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:08:59.292928 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 14:08:59.325386 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 14:08:59.357335 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 14:08:59.389117 1760410 provision.go:87] duration metric: took 683.421516ms to configureAuth
	I1018 14:08:59.389152 1760410 buildroot.go:189] setting minikube options for container-runtime
	I1018 14:08:59.389391 1760410 config.go:182] Loaded profile config "addons-891059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:08:59.389501 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:59.392319 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.392710 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.392752 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.392932 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:59.393164 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:59.393457 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:59.393687 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:59.393910 1760410 main.go:141] libmachine: Using SSH client type: native
	I1018 14:08:59.394130 1760410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1018 14:08:59.394146 1760410 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 14:08:59.663506 1760410 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
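Note: the command above drops a systemd environment file on the guest; the value makes CRI-O treat the whole service CIDR (10.96.0.0/12) as an insecure registry range so in-cluster registries work without TLS. A sketch of how the crio unit is assumed to consume it on the minikube ISO (this unit fragment is an assumption, not captured from the run):

	# /usr/lib/systemd/system/crio.service (assumed fragment)
	[Service]
	EnvironmentFile=-/etc/sysconfig/crio.minikube
	ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS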
	
	I1018 14:08:59.663540 1760410 main.go:141] libmachine: Checking connection to Docker...
	I1018 14:08:59.663551 1760410 main.go:141] libmachine: (addons-891059) Calling .GetURL
	I1018 14:08:59.665074 1760410 main.go:141] libmachine: (addons-891059) DBG | using libvirt version 8000000
	I1018 14:08:59.668182 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.668663 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.668695 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.668860 1760410 main.go:141] libmachine: Docker is up and running!
	I1018 14:08:59.668875 1760410 main.go:141] libmachine: Reticulating splines...
	I1018 14:08:59.668883 1760410 client.go:171] duration metric: took 21.185236601s to LocalClient.Create
	I1018 14:08:59.668913 1760410 start.go:167] duration metric: took 21.185315141s to libmachine.API.Create "addons-891059"
	I1018 14:08:59.668930 1760410 start.go:293] postStartSetup for "addons-891059" (driver="kvm2")
	I1018 14:08:59.668947 1760410 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 14:08:59.668967 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:08:59.669233 1760410 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 14:08:59.669269 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:59.671533 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.671957 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.671985 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.672144 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:59.672364 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:59.672523 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:59.672667 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:08:59.764031 1760410 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 14:08:59.769115 1760410 info.go:137] Remote host: Buildroot 2025.02
	I1018 14:08:59.769146 1760410 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1755824/.minikube/addons for local assets ...
	I1018 14:08:59.769224 1760410 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1755824/.minikube/files for local assets ...
	I1018 14:08:59.769248 1760410 start.go:296] duration metric: took 100.307576ms for postStartSetup
	I1018 14:08:59.769292 1760410 main.go:141] libmachine: (addons-891059) Calling .GetConfigRaw
	I1018 14:08:59.769961 1760410 main.go:141] libmachine: (addons-891059) Calling .GetIP
	I1018 14:08:59.773479 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.773901 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.773934 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.774210 1760410 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/config.json ...
	I1018 14:08:59.774465 1760410 start.go:128] duration metric: took 21.309794025s to createHost
	I1018 14:08:59.774492 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:59.777128 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.777506 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.777535 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.777745 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:59.777961 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:59.778171 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:59.778305 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:59.778500 1760410 main.go:141] libmachine: Using SSH client type: native
	I1018 14:08:59.778740 1760410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1018 14:08:59.778756 1760410 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1018 14:08:59.897254 1760410 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760796539.858103251
	
	I1018 14:08:59.897279 1760410 fix.go:216] guest clock: 1760796539.858103251
	I1018 14:08:59.897287 1760410 fix.go:229] Guest: 2025-10-18 14:08:59.858103251 +0000 UTC Remote: 2025-10-18 14:08:59.774480854 +0000 UTC m=+21.430607980 (delta=83.622397ms)
	I1018 14:08:59.897336 1760410 fix.go:200] guest clock delta is within tolerance: 83.622397ms
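Note: the fix step above compares the guest's `date +%s.%N` against the host clock and proceeds when the drift is small (83.6ms here). A rough shell equivalent, assuming $GUEST is reachable over SSH and a ~1s tolerance (the actual tolerance is not shown in this log):

	guest=$(ssh "$GUEST" 'date +%s.%N')   # guest wall clock, e.g. 1760796539.858103251
	host=$(date +%s.%N)                   # host wall clock, read immediately after
	awk -v g="$guest" -v h="$host" \
	  'BEGIN { d = h - g; if (d < 0) d = -d; printf "delta=%.6fs\n", d; exit (d > 1.0) }'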
	I1018 14:08:59.897364 1760410 start.go:83] releasing machines lock for "addons-891059", held for 21.432776387s
	I1018 14:08:59.897398 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:08:59.897684 1760410 main.go:141] libmachine: (addons-891059) Calling .GetIP
	I1018 14:08:59.901076 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.901487 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.901521 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.901705 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:08:59.902565 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:08:59.902783 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:08:59.902886 1760410 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 14:08:59.902954 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:59.903079 1760410 ssh_runner.go:195] Run: cat /version.json
	I1018 14:08:59.903102 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:59.906580 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.906633 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.907079 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.907125 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.907149 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.907167 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.907386 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:59.907427 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:59.907642 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:59.907647 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:59.907824 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:59.907846 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:59.908031 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:08:59.908099 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:08:59.992932 1760410 ssh_runner.go:195] Run: systemctl --version
	I1018 14:09:00.021820 1760410 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 14:09:00.183446 1760410 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 14:09:00.190803 1760410 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 14:09:00.190911 1760410 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 14:09:00.213058 1760410 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 14:09:00.213091 1760410 start.go:495] detecting cgroup driver to use...
	I1018 14:09:00.213178 1760410 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 14:09:00.233624 1760410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 14:09:00.252522 1760410 docker.go:218] disabling cri-docker service (if available) ...
	I1018 14:09:00.252617 1760410 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 14:09:00.272205 1760410 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 14:09:00.289717 1760410 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 14:09:00.439992 1760410 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 14:09:00.649208 1760410 docker.go:234] disabling docker service ...
	I1018 14:09:00.649292 1760410 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 14:09:00.666373 1760410 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 14:09:00.682992 1760410 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 14:09:00.835422 1760410 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 14:09:00.982700 1760410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 14:09:00.999428 1760410 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 14:09:01.024799 1760410 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 14:09:01.024906 1760410 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:09:01.038654 1760410 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 14:09:01.038752 1760410 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:09:01.052374 1760410 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:09:01.066305 1760410 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:09:01.080191 1760410 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 14:09:01.094600 1760410 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:09:01.108084 1760410 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:09:01.131069 1760410 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
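Note: taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (reconstructed from the commands themselves, not read back from the VM):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]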
	I1018 14:09:01.144608 1760410 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 14:09:01.156726 1760410 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1018 14:09:01.156791 1760410 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1018 14:09:01.180230 1760410 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
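Note: the sysctl failure above is expected on a fresh VM: /proc/sys/net/bridge/* only exists once the br_netfilter module is loaded, which is exactly what the follow-up modprobe does. The same probe-then-load pattern in isolation:

	if ! sysctl -n net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
	  sudo modprobe br_netfilter              # creates /proc/sys/net/bridge/*
	fi
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'   # pod traffic needs IP forwarding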
	I1018 14:09:01.193680 1760410 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 14:09:01.335791 1760410 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 14:09:01.461561 1760410 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 14:09:01.461683 1760410 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 14:09:01.467775 1760410 start.go:563] Will wait 60s for crictl version
	I1018 14:09:01.467870 1760410 ssh_runner.go:195] Run: which crictl
	I1018 14:09:01.472812 1760410 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1018 14:09:01.516410 1760410 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1018 14:09:01.516518 1760410 ssh_runner.go:195] Run: crio --version
	I1018 14:09:01.548303 1760410 ssh_runner.go:195] Run: crio --version
	I1018 14:09:01.582529 1760410 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1018 14:09:01.583814 1760410 main.go:141] libmachine: (addons-891059) Calling .GetIP
	I1018 14:09:01.588147 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:01.588628 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:01.588667 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:01.588973 1760410 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1018 14:09:01.594159 1760410 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
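Note: the one-liner above stages the edited hosts file under /tmp and installs it with `sudo cp` because shell redirection runs as the calling user, before sudo elevates; a plain `sudo cmd > /etc/hosts` would fail on the root-owned file. Generic form of the pattern:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts   # drop any stale entry
	  printf '192.168.39.1\thost.minikube.internal\n'   # add the fresh mapping
	} > "/tmp/h.$$"                                     # written as the unprivileged user
	sudo cp "/tmp/h.$$" /etc/hosts                      # installed as root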
	I1018 14:09:01.610280 1760410 kubeadm.go:883] updating cluster {Name:addons-891059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-891059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 14:09:01.610462 1760410 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 14:09:01.610527 1760410 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 14:09:01.648777 1760410 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1018 14:09:01.648866 1760410 ssh_runner.go:195] Run: which lz4
	I1018 14:09:01.653595 1760410 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1018 14:09:01.658875 1760410 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1018 14:09:01.658909 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1018 14:09:03.215465 1760410 crio.go:462] duration metric: took 1.561899205s to copy over tarball
	I1018 14:09:03.215548 1760410 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1018 14:09:04.890701 1760410 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.675118935s)
	I1018 14:09:04.890741 1760410 crio.go:469] duration metric: took 1.675237586s to extract the tarball
	I1018 14:09:04.890755 1760410 ssh_runner.go:146] rm: /preloaded.tar.lz4
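Note: this is the image-preload fast path: the stat probe finds no /preloaded.tar.lz4 on first boot, so the ~409 MB cached tarball is copied over, unpacked into /var (which backs CRI-O's image store), and deleted. Condensed to the guest-side commands:

	stat -c "%s %y" /preloaded.tar.lz4 || true      # absent on first boot; host then scps it over
	sudo tar --xattrs --xattrs-include security.capability \
	     -I lz4 -C /var -xf /preloaded.tar.lz4      # populate the image store
	rm /preloaded.tar.lz4                           # reclaim the disk space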
	I1018 14:09:04.933819 1760410 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 14:09:04.980242 1760410 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 14:09:04.980269 1760410 cache_images.go:85] Images are preloaded, skipping loading
	I1018 14:09:04.980277 1760410 kubeadm.go:934] updating node { 192.168.39.100 8443 v1.34.1 crio true true} ...
	I1018 14:09:04.980412 1760410 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-891059 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-891059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 14:09:04.980487 1760410 ssh_runner.go:195] Run: crio config
	I1018 14:09:05.031493 1760410 cni.go:84] Creating CNI manager for ""
	I1018 14:09:05.031532 1760410 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 14:09:05.031561 1760410 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 14:09:05.031594 1760410 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.100 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-891059 NodeName:addons-891059 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 14:09:05.031791 1760410 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-891059"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.100"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.100"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
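Note: the generated file above stitches four API objects into one YAML document separated by ---: InitConfiguration (node registration and bootstrap tokens), ClusterConfiguration (control-plane flags and endpoints), KubeletConfiguration, and KubeProxyConfiguration. A quick pre-init sanity check, assuming this kubeadm build supports the validate subcommand:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	     --config /var/tmp/minikube/kubeadm.yaml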
	
	I1018 14:09:05.031889 1760410 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 14:09:05.045249 1760410 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 14:09:05.045322 1760410 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 14:09:05.057594 1760410 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1018 14:09:05.079304 1760410 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 14:09:05.101229 1760410 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1018 14:09:05.123379 1760410 ssh_runner.go:195] Run: grep 192.168.39.100	control-plane.minikube.internal$ /etc/hosts
	I1018 14:09:05.128149 1760410 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 14:09:05.144740 1760410 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 14:09:05.287867 1760410 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 14:09:05.310139 1760410 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059 for IP: 192.168.39.100
	I1018 14:09:05.310175 1760410 certs.go:195] generating shared ca certs ...
	I1018 14:09:05.310203 1760410 certs.go:227] acquiring lock for ca certs: {Name:mk20fae4d22bb4937e66ac0eaa52c1608fa22770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:05.310412 1760410 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.key
	I1018 14:09:05.928678 1760410 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt ...
	I1018 14:09:05.928717 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt: {Name:mk48305fdb94e31a92b48facef68eec843776b87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:05.928918 1760410 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.key ...
	I1018 14:09:05.928931 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.key: {Name:mk701e118ad43b61f158a839f73ec6b965102354 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:05.929018 1760410 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.key
	I1018 14:09:06.043454 1760410 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.crt ...
	I1018 14:09:06.043488 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.crt: {Name:mk77ddeb4af674721966c75040f4f1fb5d69023d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:06.043679 1760410 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.key ...
	I1018 14:09:06.043694 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.key: {Name:mk65d64f37c13d41fae5e3b77d20098229c0b1de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:06.043772 1760410 certs.go:257] generating profile certs ...
	I1018 14:09:06.043835 1760410 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.key
	I1018 14:09:06.043862 1760410 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt with IP's: []
	I1018 14:09:06.259815 1760410 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt ...
	I1018 14:09:06.259852 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt: {Name:mk812f759d940b265a8e60c894cb050949fd9e68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:06.260037 1760410 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.key ...
	I1018 14:09:06.260054 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.key: {Name:mk50fce6a65f5d969bea0e1a48d418e711ccdfe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:06.260134 1760410 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.key.c2889daa
	I1018 14:09:06.260154 1760410 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.crt.c2889daa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.100]
	I1018 14:09:06.486406 1760410 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.crt.c2889daa ...
	I1018 14:09:06.486442 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.crt.c2889daa: {Name:mk13f44e79eaa89077b52da6090b647e00b64732 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:06.486629 1760410 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.key.c2889daa ...
	I1018 14:09:06.486643 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.key.c2889daa: {Name:mkbe94bfad32eaf986c1751799d5eb527ff32552 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:06.486733 1760410 certs.go:382] copying /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.crt.c2889daa -> /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.crt
	I1018 14:09:06.486836 1760410 certs.go:386] copying /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.key.c2889daa -> /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.key
	I1018 14:09:06.486900 1760410 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/proxy-client.key
	I1018 14:09:06.486924 1760410 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/proxy-client.crt with IP's: []
	I1018 14:09:06.798152 1760410 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/proxy-client.crt ...
	I1018 14:09:06.798201 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/proxy-client.crt: {Name:mk29883864de081c2ef5f64c49afd825bbef9059 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:06.798410 1760410 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/proxy-client.key ...
	I1018 14:09:06.798426 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/proxy-client.key: {Name:mk619e894bc6a3076fe0e333221023492d7ff3e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:06.798649 1760410 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 14:09:06.798690 1760410 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem (1082 bytes)
	I1018 14:09:06.798715 1760410 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/cert.pem (1123 bytes)
	I1018 14:09:06.798735 1760410 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/key.pem (1675 bytes)
	I1018 14:09:06.799486 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 14:09:06.845692 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 14:09:06.882745 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 14:09:06.918371 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 14:09:06.952411 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 14:09:06.985595 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 14:09:07.018257 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 14:09:07.051475 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 14:09:07.086174 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 14:09:07.118849 1760410 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 14:09:07.141590 1760410 ssh_runner.go:195] Run: openssl version
	I1018 14:09:07.148896 1760410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 14:09:07.163684 1760410 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 14:09:07.169573 1760410 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 14:09 /usr/share/ca-certificates/minikubeCA.pem
	I1018 14:09:07.169638 1760410 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 14:09:07.177781 1760410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
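Note: the two steps above install minikubeCA.pem into the system trust store using OpenSSL's subject-hash convention: the certificate is symlinked as <hash>.0 (b5213941.0 here, matching the hash probed just before) so the default verify paths can find it. Standalone equivalent:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # -> b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"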
	I1018 14:09:07.192577 1760410 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 14:09:07.199705 1760410 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 14:09:07.199768 1760410 kubeadm.go:400] StartCluster: {Name:addons-891059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-891059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:09:07.199879 1760410 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 14:09:07.199953 1760410 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 14:09:07.241737 1760410 cri.go:89] found id: ""
	I1018 14:09:07.241827 1760410 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 14:09:07.254574 1760410 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 14:09:07.267441 1760410 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 14:09:07.280136 1760410 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 14:09:07.280159 1760410 kubeadm.go:157] found existing configuration files:
	
	I1018 14:09:07.280207 1760410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 14:09:07.292712 1760410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 14:09:07.292791 1760410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 14:09:07.305268 1760410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 14:09:07.317524 1760410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 14:09:07.317645 1760410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 14:09:07.330484 1760410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 14:09:07.342579 1760410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 14:09:07.342663 1760410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 14:09:07.355673 1760410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 14:09:07.367952 1760410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 14:09:07.368036 1760410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 14:09:07.381331 1760410 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1018 14:09:07.547925 1760410 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 14:09:20.098002 1760410 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 14:09:20.098063 1760410 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 14:09:20.098145 1760410 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 14:09:20.098299 1760410 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 14:09:20.098447 1760410 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 14:09:20.098529 1760410 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 14:09:20.100393 1760410 out.go:252]   - Generating certificates and keys ...
	I1018 14:09:20.100495 1760410 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 14:09:20.100629 1760410 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 14:09:20.100764 1760410 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 14:09:20.100857 1760410 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 14:09:20.100964 1760410 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 14:09:20.101051 1760410 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 14:09:20.101129 1760410 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 14:09:20.101315 1760410 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-891059 localhost] and IPs [192.168.39.100 127.0.0.1 ::1]
	I1018 14:09:20.101405 1760410 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 14:09:20.101571 1760410 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-891059 localhost] and IPs [192.168.39.100 127.0.0.1 ::1]
	I1018 14:09:20.101672 1760410 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 14:09:20.101744 1760410 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 14:09:20.101795 1760410 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 14:09:20.101843 1760410 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 14:09:20.101896 1760410 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 14:09:20.101961 1760410 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 14:09:20.102011 1760410 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 14:09:20.102082 1760410 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 14:09:20.102127 1760410 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 14:09:20.102199 1760410 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 14:09:20.102260 1760410 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 14:09:20.103813 1760410 out.go:252]   - Booting up control plane ...
	I1018 14:09:20.103893 1760410 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 14:09:20.103954 1760410 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 14:09:20.104007 1760410 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 14:09:20.104089 1760410 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 14:09:20.104181 1760410 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 14:09:20.104334 1760410 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 14:09:20.104446 1760410 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 14:09:20.104482 1760410 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 14:09:20.104625 1760410 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 14:09:20.104745 1760410 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 14:09:20.104820 1760410 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.50245312s
	I1018 14:09:20.104902 1760410 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 14:09:20.104976 1760410 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.100:8443/livez
	I1018 14:09:20.105057 1760410 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 14:09:20.105126 1760410 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 14:09:20.105186 1760410 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.213660902s
	I1018 14:09:20.105249 1760410 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.327835251s
	I1018 14:09:20.105309 1760410 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.50283692s
	I1018 14:09:20.105410 1760410 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 14:09:20.105516 1760410 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 14:09:20.105572 1760410 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 14:09:20.105752 1760410 kubeadm.go:318] [mark-control-plane] Marking the node addons-891059 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 14:09:20.105817 1760410 kubeadm.go:318] [bootstrap-token] Using token: ci4c4o.8llcllq96muz9osf
	I1018 14:09:20.108036 1760410 out.go:252]   - Configuring RBAC rules ...
	I1018 14:09:20.108126 1760410 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 14:09:20.108210 1760410 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 14:09:20.108332 1760410 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 14:09:20.108465 1760410 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 14:09:20.108571 1760410 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 14:09:20.108668 1760410 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 14:09:20.108821 1760410 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 14:09:20.108863 1760410 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 14:09:20.108900 1760410 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 14:09:20.108911 1760410 kubeadm.go:318] 
	I1018 14:09:20.108961 1760410 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 14:09:20.108967 1760410 kubeadm.go:318] 
	I1018 14:09:20.109026 1760410 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 14:09:20.109031 1760410 kubeadm.go:318] 
	I1018 14:09:20.109051 1760410 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 14:09:20.109098 1760410 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 14:09:20.109140 1760410 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 14:09:20.109146 1760410 kubeadm.go:318] 
	I1018 14:09:20.109214 1760410 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 14:09:20.109221 1760410 kubeadm.go:318] 
	I1018 14:09:20.109258 1760410 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 14:09:20.109264 1760410 kubeadm.go:318] 
	I1018 14:09:20.109311 1760410 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 14:09:20.109381 1760410 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 14:09:20.109469 1760410 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 14:09:20.109488 1760410 kubeadm.go:318] 
	I1018 14:09:20.109554 1760410 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 14:09:20.109622 1760410 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 14:09:20.109628 1760410 kubeadm.go:318] 
	I1018 14:09:20.109698 1760410 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ci4c4o.8llcllq96muz9osf \
	I1018 14:09:20.109796 1760410 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b3c5d368998c8b590f32f5883c53beccabaf63a2ceb2a6106ae6129f9dfd2290 \
	I1018 14:09:20.109908 1760410 kubeadm.go:318] 	--control-plane 
	I1018 14:09:20.109934 1760410 kubeadm.go:318] 
	I1018 14:09:20.110067 1760410 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 14:09:20.110077 1760410 kubeadm.go:318] 
	I1018 14:09:20.110176 1760410 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ci4c4o.8llcllq96muz9osf \
	I1018 14:09:20.110279 1760410 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b3c5d368998c8b590f32f5883c53beccabaf63a2ceb2a6106ae6129f9dfd2290 
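
The join commands above carry two credentials: the bootstrap token and a CA certificate hash. Per kubeadm's documented format, the sha256:... value is computed over the DER-encoded Subject Public Key Info of the cluster CA's public key. A small Go sketch, assuming kubeadm's default CA path /etc/kubernetes/pki/ca.crt:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Default kubeadm location for the cluster CA certificate.
		pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// The discovery hash is SHA-256 over the DER-encoded
		// Subject Public Key Info of the CA's public key.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		sum := sha256.Sum256(spki)
		fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
	}
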
	I1018 14:09:20.110293 1760410 cni.go:84] Creating CNI manager for ""
	I1018 14:09:20.110301 1760410 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 14:09:20.111886 1760410 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1018 14:09:20.113016 1760410 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1018 14:09:20.127933 1760410 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
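
The actual 496-byte conflist copied above is not shown in the log, so the payload below is an assumption: a minimal bridge-plus-portmap CNI config of the kind minikube writes for the crio runtime, together with the mkdir and write steps the two ssh_runner commands perform:

	package main

	import "os"

	// Assumed payload: field values are illustrative, not minikube's exact config.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
	    },
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}`

	func main() {
		// Same two steps as the "mkdir -p" and "scp memory" commands in the log.
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}
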
	I1018 14:09:20.158289 1760410 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 14:09:20.158398 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:20.158416 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-891059 minikube.k8s.io/updated_at=2025_10_18T14_09_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404 minikube.k8s.io/name=addons-891059 minikube.k8s.io/primary=true
	I1018 14:09:20.315678 1760410 ops.go:34] apiserver oom_adj: -16
	I1018 14:09:20.315834 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:20.816073 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:21.316085 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:21.816909 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:22.316182 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:22.816708 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:23.316221 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:23.816476 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:24.316683 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:24.414532 1760410 kubeadm.go:1113] duration metric: took 4.256222081s to wait for elevateKubeSystemPrivileges
	I1018 14:09:24.414583 1760410 kubeadm.go:402] duration metric: took 17.214819054s to StartCluster
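
The run of identical "kubectl get sa default" commands above is a poll: minikube retries roughly every 500ms until the default service account exists, then records the elevateKubeSystemPrivileges duration metric. A sketch of that loop, reusing the binary path and kubeconfig flags from the log; the loop structure and 5-minute budget are assumptions, not minikube's exact implementation:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// The 500ms interval matches the timestamps of the repeated runs above;
		// the 5-minute budget is an assumption.
		deadline := time.Now().Add(5 * time.Minute)
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.1/kubectl",
				"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
			if err := cmd.Run(); err == nil {
				fmt.Println("default service account exists; kube-system privileges are usable")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for the default service account")
	}
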
	I1018 14:09:24.414614 1760410 settings.go:142] acquiring lock: {Name:mkc4a015ef1628793f35d59d734503738678fa0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:24.414803 1760410 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-1755824/kubeconfig
	I1018 14:09:24.415376 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/kubeconfig: {Name:mkd0359d239071160661347e1005ef052a3265ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:24.415641 1760410 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 14:09:24.415700 1760410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 14:09:24.415754 1760410 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
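
The toEnable map above drives everything that follows: each addon flagged true gets its own "Setting addon ...=true" sequence. A trivial sketch of consuming such a map in a stable order; the map literal is trimmed to a few entries from the log, and the enable step is a placeholder (the real code enables addons concurrently, as the interleaved lines below show):

	package main

	import (
		"fmt"
		"sort"
	)

	func main() {
		// Trimmed from the toEnable map in the log above.
		toEnable := map[string]bool{
			"registry":            true,
			"csi-hostpath-driver": true,
			"volcano":             true,
			"dashboard":           false,
		}
		var names []string
		for name, on := range toEnable {
			if on {
				names = append(names, name)
			}
		}
		sort.Strings(names) // deterministic order for the sketch only
		for _, name := range names {
			fmt.Printf("Setting addon %s=true in %q\n", name, "addons-891059")
		}
	}
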
	I1018 14:09:24.415887 1760410 addons.go:69] Setting yakd=true in profile "addons-891059"
	I1018 14:09:24.415896 1760410 config.go:182] Loaded profile config "addons-891059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:09:24.415930 1760410 addons.go:238] Setting addon yakd=true in "addons-891059"
	I1018 14:09:24.415941 1760410 addons.go:69] Setting registry-creds=true in profile "addons-891059"
	I1018 14:09:24.415953 1760410 addons.go:238] Setting addon registry-creds=true in "addons-891059"
	I1018 14:09:24.415971 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.415979 1760410 addons.go:69] Setting volcano=true in profile "addons-891059"
	I1018 14:09:24.415983 1760410 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-891059"
	I1018 14:09:24.415991 1760410 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-891059"
	I1018 14:09:24.415998 1760410 addons.go:69] Setting volumesnapshots=true in profile "addons-891059"
	I1018 14:09:24.416010 1760410 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-891059"
	I1018 14:09:24.415959 1760410 addons.go:69] Setting inspektor-gadget=true in profile "addons-891059"
	I1018 14:09:24.416026 1760410 addons.go:69] Setting storage-provisioner=true in profile "addons-891059"
	I1018 14:09:24.416035 1760410 addons.go:238] Setting addon storage-provisioner=true in "addons-891059"
	I1018 14:09:24.415990 1760410 addons.go:238] Setting addon volcano=true in "addons-891059"
	I1018 14:09:24.416051 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.416063 1760410 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-891059"
	I1018 14:09:24.416073 1760410 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-891059"
	I1018 14:09:24.416105 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.416110 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.416136 1760410 addons.go:69] Setting metrics-server=true in profile "addons-891059"
	I1018 14:09:24.416172 1760410 addons.go:238] Setting addon metrics-server=true in "addons-891059"
	I1018 14:09:24.416211 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.416266 1760410 addons.go:69] Setting registry=true in profile "addons-891059"
	I1018 14:09:24.416290 1760410 addons.go:238] Setting addon registry=true in "addons-891059"
	I1018 14:09:24.416318 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.416454 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.416462 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.415971 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.416496 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.416504 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.416536 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.416546 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.416565 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.416634 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.416670 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.416702 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.416010 1760410 addons.go:238] Setting addon volumesnapshots=true in "addons-891059"
	I1018 14:09:24.416740 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.416750 1760410 addons.go:69] Setting cloud-spanner=true in profile "addons-891059"
	I1018 14:09:24.416761 1760410 addons.go:238] Setting addon cloud-spanner=true in "addons-891059"
	I1018 14:09:24.416772 1760410 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-891059"
	I1018 14:09:24.416738 1760410 addons.go:69] Setting gcp-auth=true in profile "addons-891059"
	I1018 14:09:24.416797 1760410 mustload.go:65] Loading cluster: addons-891059
	I1018 14:09:24.416803 1760410 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-891059"
	I1018 14:09:24.416808 1760410 addons.go:69] Setting ingress-dns=true in profile "addons-891059"
	I1018 14:09:24.416054 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.416816 1760410 addons.go:69] Setting default-storageclass=true in profile "addons-891059"
	I1018 14:09:24.416827 1760410 addons.go:69] Setting ingress=true in profile "addons-891059"
	I1018 14:09:24.416838 1760410 addons.go:238] Setting addon ingress=true in "addons-891059"
	I1018 14:09:24.416838 1760410 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-891059"
	I1018 14:09:24.416009 1760410 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-891059"
	I1018 14:09:24.416036 1760410 addons.go:238] Setting addon inspektor-gadget=true in "addons-891059"
	I1018 14:09:24.416819 1760410 addons.go:238] Setting addon ingress-dns=true in "addons-891059"
	I1018 14:09:24.417180 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.417202 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.417220 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.417277 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.417301 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.417457 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.417670 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.417700 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.417772 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.417855 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.417889 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.417365 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.418030 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.418152 1760410 config.go:182] Loaded profile config "addons-891059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:09:24.418393 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.418424 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.418444 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.418521 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.418552 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.418624 1760410 out.go:179] * Verifying Kubernetes components...
	I1018 14:09:24.418907 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.418967 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.422521 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.422570 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.422950 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.423390 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.423424 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.425453 1760410 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 14:09:24.428788 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.428847 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.432739 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.432818 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.446515 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41227
	I1018 14:09:24.447603 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.448044 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41701
	I1018 14:09:24.448620 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.449130 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.449150 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.450319 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.450375 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.450390 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.452314 1760410 main.go:141] libmachine: () Calling .GetMachineName
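
The handshake that just completed ("Launching plugin server" / "Plugin server listening at address 127.0.0.1:NNNNN" / ".GetVersion" / "Using API Version  1") repeats for every addon because each one launches its own kvm2 driver process serving RPC on a random loopback port, and the client negotiates an API version first. A generic net/rpc sketch of that pattern; the service and method names here are illustrative assumptions, not libmachine's actual wire protocol:

	package main

	import (
		"fmt"
		"net"
		"net/rpc"
	)

	// Illustrative service: name, method, and argument types are assumptions.
	type Driver struct{}

	func (d *Driver) GetVersion(_ int, version *int) error {
		*version = 1 // matches the "Using API Version  1" lines in the log
		return nil
	}

	func main() {
		srv := rpc.NewServer()
		if err := srv.Register(&Driver{}); err != nil {
			panic(err)
		}
		ln, err := net.Listen("tcp", "127.0.0.1:0") // random loopback port, as in the log
		if err != nil {
			panic(err)
		}
		fmt.Println("Plugin server listening at address", ln.Addr())
		go srv.Accept(ln)

		client, err := rpc.Dial("tcp", ln.Addr().String())
		if err != nil {
			panic(err)
		}
		defer client.Close()
		var version int
		if err := client.Call("Driver.GetVersion", 0, &version); err != nil {
			panic(err)
		}
		fmt.Println("Using API Version", version)
	}
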
	I1018 14:09:24.452974 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.453024 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.455440 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43969
	I1018 14:09:24.456592 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.456640 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.459616 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46693
	I1018 14:09:24.459757 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38083
	I1018 14:09:24.459794 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42705
	I1018 14:09:24.460277 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.460735 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46237
	I1018 14:09:24.460955 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.463457 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.463624 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.463650 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.463943 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.463970 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.464096 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.464766 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.464811 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.466143 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.466259 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.466646 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.467503 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.467526 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.468700 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.468724 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.469056 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.469102 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.469455 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.469522 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.470074 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.470106 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.470616 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.470636 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.471024 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41521
	I1018 14:09:24.471853 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.472590 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.472616 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.473010 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.473088 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.473315 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.473750 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34635
	I1018 14:09:24.474289 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.474360 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.474951 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.477612 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34041
	I1018 14:09:24.478762 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.479308 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.479333 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.479844 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.480258 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.480895 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.482303 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46095
	I1018 14:09:24.483440 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.483700 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.483715 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.483863 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.483872 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.484222 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.484556 1760410 addons.go:238] Setting addon default-storageclass=true in "addons-891059"
	I1018 14:09:24.484598 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.484735 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.484774 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.484961 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.485003 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.485644 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.486185 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.486221 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.488758 1760410 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-891059"
	I1018 14:09:24.488809 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.489181 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.489230 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.489519 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.489701 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46233
	I1018 14:09:24.494198 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43169
	I1018 14:09:24.495236 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.496047 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.496066 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.496101 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41357
	I1018 14:09:24.496638 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34835
	I1018 14:09:24.496952 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.497036 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45323
	I1018 14:09:24.497223 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.497670 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.497914 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.498318 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.498682 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38593
	I1018 14:09:24.498718 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.498744 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.499070 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.499580 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.499603 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.499631 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.499677 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.499736 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.500137 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.500171 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.500183 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.500231 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.500253 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.500704 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.500747 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.501004 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.501037 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.501047 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.501305 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.501852 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.501890 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.505372 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.505855 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.508424 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.508460 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.508580 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.509093 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.509143 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.510293 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33423
	I1018 14:09:24.510851 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.511364 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.512160 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.512181 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.512251 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.513848 1760410 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1018 14:09:24.513854 1760410 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1018 14:09:24.515867 1760410 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1018 14:09:24.515885 1760410 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1018 14:09:24.515912 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.516312 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.517033 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.517295 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.517359 1760410 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 14:09:24.519170 1760410 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 14:09:24.519288 1760410 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1018 14:09:24.520436 1760410 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 14:09:24.520516 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1018 14:09:24.520549 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.521274 1760410 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 14:09:24.521295 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1018 14:09:24.521320 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.521822 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38545
	I1018 14:09:24.522725 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.523307 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.523325 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.523932 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.524192 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.527503 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.527590 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.527618 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.527649 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.527682 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35355
	I1018 14:09:24.528451 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.528456 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.528513 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.528706 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.528847 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
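
Each "new ssh client" line above builds a client from the same four fields: the node IP, port 22, the per-machine key path, and user docker. A minimal sketch with golang.org/x/crypto/ssh using those values; the InsecureIgnoreHostKey callback is an assumption for brevity, not necessarily how minikube's sshutil validates host keys:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// IP, port, user, and key path taken from the "new ssh client" line above.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User: "docker",
			Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
			// Assumption: skip host-key checking for brevity.
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		}
		client, err := ssh.Dial("tcp", "192.168.39.100:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		fmt.Println("connected; ready to run commands like the ssh_runner lines above")
	}
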
	I1018 14:09:24.529262 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.529279 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.529677 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.529956 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.530621 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39361
	I1018 14:09:24.531189 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.531587 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33089
	I1018 14:09:24.532552 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.532587 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.533165 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.533199 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.534272 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.534329 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.534670 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43865
	I1018 14:09:24.534888 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.534927 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.534934 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.535018 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36367
	I1018 14:09:24.535456 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.536405 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.536423 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.536459 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.536498 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.536522 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.536586 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.536638 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.536641 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.536797 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.536878 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.537335 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.537386 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.537814 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34033
	I1018 14:09:24.537939 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.538069 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.538085 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.538431 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.538510 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.538875 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.539073 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.539143 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33021
	I1018 14:09:24.540287 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1018 14:09:24.540559 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.540650 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.540661 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.540287 1760410 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1018 14:09:24.540789 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44949
	I1018 14:09:24.541394 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.541512 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.541542 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.541580 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.542392 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.542582 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.542593 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.541968 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.541995 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.542027 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.541787 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.542477 1760410 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 14:09:24.542769 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1018 14:09:24.542787 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.543139 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.543258 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1018 14:09:24.543232 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.543329 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.544059 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.544119 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.544691 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.544728 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.545623 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.545670 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.547151 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1018 14:09:24.547560 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.548774 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.548901 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1018 14:09:24.549486 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.549513 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.549520 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38547
	I1018 14:09:24.549555 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.549743 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.549944 1760410 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1018 14:09:24.549986 1760410 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1018 14:09:24.550111 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.550462 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.550548 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.551322 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1018 14:09:24.551448 1760410 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1018 14:09:24.551471 1760410 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1018 14:09:24.551503 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.552417 1760410 out.go:179]   - Using image docker.io/registry:3.0.0
	I1018 14:09:24.552611 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.552668 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.552694 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.553138 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.553466 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1018 14:09:24.553546 1760410 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1018 14:09:24.553557 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1018 14:09:24.553575 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.555796 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1018 14:09:24.556091 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.556537 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.559463 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1018 14:09:24.560143 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39261
	I1018 14:09:24.560689 1760410 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1018 14:09:24.560709 1760410 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1018 14:09:24.560733 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.561360 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.562223 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.562248 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.562334 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37369
	I1018 14:09:24.564735 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.564798 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.564809 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.564889 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.564947 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43255
	I1018 14:09:24.565207 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.565656 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.565686 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.565804 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.565867 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.566012 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.566138 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.566251 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.566837 1760410 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1018 14:09:24.566841 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.566954 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.567074 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.567098 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.567382 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.567544 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.567609 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.567849 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.568018 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.568167 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.568390 1760410 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 14:09:24.568518 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1018 14:09:24.568539 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.568408 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.569303 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.569321 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.569601 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41813
	I1018 14:09:24.569798 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.569904 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44161
	I1018 14:09:24.570247 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.570534 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.570627 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.570989 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.571754 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.571776 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.571809 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.571835 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.571888 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.571942 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.572034 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34561
	I1018 14:09:24.572101 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:24.572114 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:24.572301 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.572420 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.572512 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:24.572532 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:24.572545 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:24.572552 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:24.572560 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:24.573079 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:24.573081 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.573095 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.573102 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:24.573108 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.573114 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	W1018 14:09:24.573205 1760410 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1018 14:09:24.573206 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.573377 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.573909 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.574598 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.574613 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.574986 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.575284 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.575403 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.576055 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41389
	I1018 14:09:24.576282 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.576635 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.576750 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.577145 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.577164 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.577387 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.577425 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.578449 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.578485 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1018 14:09:24.578527 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.578725 1760410 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 14:09:24.578741 1760410 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 14:09:24.578760 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.578783 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.579845 1760410 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1018 14:09:24.579890 1760410 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1018 14:09:24.579901 1760410 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1018 14:09:24.579916 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.579866 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.579966 1760410 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1018 14:09:24.581298 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.581518 1760410 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1018 14:09:24.581555 1760410 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 14:09:24.581566 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1018 14:09:24.581582 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.581701 1760410 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1018 14:09:24.581733 1760410 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1018 14:09:24.581762 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.582432 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.582611 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.582663 1760410 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1018 14:09:24.582679 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1018 14:09:24.582698 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.582744 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.583429 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.583635 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.583761 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.583832 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.584362 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35513
	I1018 14:09:24.584568 1760410 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 14:09:24.585155 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.585916 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.585938 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.586019 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.586361 1760410 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 14:09:24.586383 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 14:09:24.586403 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.586683 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.586913 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.587506 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.587537 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.587565 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.587802 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.587988 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.588388 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.588708 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.588631 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.588734 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.589129 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.589325 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.589522 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.590171 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.590296 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.590321 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.590811 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.591126 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.591174 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.591319 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.591484 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.591523 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.591739 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.591761 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.591773 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.591922 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.592011 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.592200 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.592253 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.592273 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.592387 1760410 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1018 14:09:24.592403 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.592465 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.592624 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.592714 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.592859 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.592993 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.593164 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.593741 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.593774 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.593963 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.594146 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.594295 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.594464 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.595795 1760410 out.go:179]   - Using image docker.io/busybox:stable
	I1018 14:09:24.597040 1760410 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 14:09:24.597063 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1018 14:09:24.597082 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.600612 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.600998 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.601019 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.601363 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.601584 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.601753 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.601908 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	W1018 14:09:24.742102 1760410 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:60786->192.168.39.100:22: read: connection reset by peer
	I1018 14:09:24.742153 1760410 retry.go:31] will retry after 155.166839ms: ssh: handshake failed: read tcp 192.168.39.1:60786->192.168.39.100:22: read: connection reset by peer
	W1018 14:09:24.905499 1760410 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:60832->192.168.39.100:22: read: connection reset by peer
	I1018 14:09:24.905539 1760410 retry.go:31] will retry after 290.251665ms: ssh: handshake failed: read tcp 192.168.39.1:60832->192.168.39.100:22: read: connection reset by peer
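The two handshake failures above are absorbed by minikube's generic retry helper (retry.go:31), which re-dials after a short, jittered delay instead of failing the whole addon install. A minimal Go sketch of that pattern follows; dialWithRetry, maxAttempts, and the delay values are illustrative placeholders, not minikube's actual implementation:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// dialWithRetry retries dial with a jittered, doubling backoff, mirroring
	// the "will retry after ..." lines in the log above.
	func dialWithRetry(dial func() error, maxAttempts int) error {
		var err error
		backoff := 100 * time.Millisecond
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			if err = dial(); err == nil {
				return nil
			}
			// Jitter so parallel dialers don't hammer sshd in lockstep.
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("attempt %d failed, will retry after %v: %v\n", attempt, sleep, err)
			time.Sleep(sleep)
			backoff *= 2 // grow the base delay each round
		}
		return fmt.Errorf("giving up after %d attempts: %w", maxAttempts, err)
	}

	func main() {
		calls := 0
		err := dialWithRetry(func() error {
			calls++
			if calls < 3 {
				return errors.New("ssh: handshake failed: connection reset by peer")
			}
			return nil
		}, 5)
		fmt.Println("result:", err)
	}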
	I1018 14:09:25.195583 1760410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
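The one-liner above looks dense but is mechanical: it dumps the coredns ConfigMap, uses sed to splice two fragments into the Corefile (a hosts block mapping the host-side gateway 192.168.39.1 to host.minikube.internal, inserted just before the "forward . /etc/resolv.conf" line, plus a "log" directive before "errors"), then pipes the result back through kubectl replace. After the edit the relevant Corefile section reads roughly as follows; plugins the sed script does not touch are elided:

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.39.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}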
	I1018 14:09:25.195661 1760410 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 14:09:25.238678 1760410 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1018 14:09:25.238705 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1018 14:09:25.239580 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 14:09:25.243439 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 14:09:25.244497 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 14:09:25.264037 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 14:09:25.312273 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 14:09:25.315550 1760410 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1018 14:09:25.315578 1760410 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1018 14:09:25.320939 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1018 14:09:25.324940 1760410 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1018 14:09:25.324962 1760410 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1018 14:09:25.327771 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 14:09:25.328434 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 14:09:25.339706 1760410 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1018 14:09:25.339737 1760410 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1018 14:09:25.369886 1760410 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1018 14:09:25.369914 1760410 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1018 14:09:25.370459 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 14:09:25.537261 1760410 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1018 14:09:25.537300 1760410 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1018 14:09:25.585100 1760410 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1018 14:09:25.585145 1760410 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1018 14:09:25.685376 1760410 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1018 14:09:25.685407 1760410 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1018 14:09:25.768517 1760410 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1018 14:09:25.768553 1760410 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1018 14:09:25.768978 1760410 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1018 14:09:25.769004 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1018 14:09:25.814134 1760410 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:25.814164 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1018 14:09:25.853698 1760410 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 14:09:25.853731 1760410 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1018 14:09:26.014188 1760410 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1018 14:09:26.014222 1760410 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1018 14:09:26.060465 1760410 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1018 14:09:26.060498 1760410 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1018 14:09:26.091905 1760410 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1018 14:09:26.091940 1760410 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1018 14:09:26.114081 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1018 14:09:26.248999 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:26.271395 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 14:09:26.432032 1760410 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1018 14:09:26.432068 1760410 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1018 14:09:26.436207 1760410 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1018 14:09:26.436242 1760410 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1018 14:09:26.558205 1760410 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1018 14:09:26.558233 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1018 14:09:26.717226 1760410 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 14:09:26.717268 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1018 14:09:26.717225 1760410 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1018 14:09:26.717386 1760410 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1018 14:09:26.825284 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1018 14:09:27.137937 1760410 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1018 14:09:27.137970 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1018 14:09:27.440610 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 14:09:27.873332 1760410 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1018 14:09:27.873382 1760410 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1018 14:09:28.056527 1760410 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.860893783s)
	I1018 14:09:28.056563 1760410 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1018 14:09:28.056618 1760410 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.860884504s)
	I1018 14:09:28.056693 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.817081387s)
	I1018 14:09:28.056751 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:28.056765 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:28.056766 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.813291284s)
	I1018 14:09:28.056811 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:28.056828 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:28.057259 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:28.057276 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:28.057280 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:28.057300 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:28.057326 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:28.057416 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:28.057439 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:28.057482 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:28.057493 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:28.057712 1760410 node_ready.go:35] waiting up to 6m0s for node "addons-891059" to be "Ready" ...
	I1018 14:09:28.057737 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:28.057777 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:28.057784 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:28.057851 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:28.057951 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:28.057965 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:28.062488 1760410 node_ready.go:49] node "addons-891059" is "Ready"
	I1018 14:09:28.062522 1760410 node_ready.go:38] duration metric: took 4.780102ms for node "addons-891059" to be "Ready" ...
	I1018 14:09:28.062537 1760410 api_server.go:52] waiting for apiserver process to appear ...
	I1018 14:09:28.062602 1760410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 14:09:28.633793 1760410 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-891059" context rescaled to 1 replicas
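A single-node cluster has no use for two CoreDNS replicas, so minikube scales the deployment down; the effect is the same as running something like:

	kubectl --context addons-891059 -n kube-system scale deployment coredns --replicas=1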
	I1018 14:09:28.657122 1760410 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1018 14:09:28.657153 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1018 14:09:29.297640 1760410 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1018 14:09:29.297673 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1018 14:09:29.722108 1760410 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 14:09:29.722138 1760410 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1018 14:09:30.201846 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 14:09:31.747160 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.502603848s)
	I1018 14:09:31.747234 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:31.747249 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:31.747635 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:31.747662 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:31.747675 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:31.747685 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:31.747976 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:31.748000 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:31.989912 1760410 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1018 14:09:31.989960 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:31.993852 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:31.994463 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:31.994498 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:31.994763 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:31.995004 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:31.995210 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:31.995372 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:32.401099 1760410 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1018 14:09:32.582819 1760410 addons.go:238] Setting addon gcp-auth=true in "addons-891059"
	I1018 14:09:32.582898 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:32.583276 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:32.583338 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:32.598366 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38277
	I1018 14:09:32.598979 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:32.599565 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:32.599588 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:32.599990 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:32.600582 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:32.600654 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:32.615909 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44783
	I1018 14:09:32.616524 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:32.616999 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:32.617024 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:32.617441 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:32.617696 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:32.619651 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:32.619882 1760410 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1018 14:09:32.619905 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:32.623262 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:32.623788 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:32.623815 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:32.624039 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:32.624251 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:32.624440 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:32.624678 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:34.410431 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.146350667s)
	I1018 14:09:34.410505 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.410520 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.410535 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.098229729s)
	I1018 14:09:34.410591 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.410608 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.410627 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.08966013s)
	I1018 14:09:34.410671 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.410688 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.410780 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (9.082972673s)
	I1018 14:09:34.410825 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.410842 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.410885 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.082422149s)
	I1018 14:09:34.410912 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.410921 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.410996 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.040510674s)
	I1018 14:09:34.411019 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.411040 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.411044 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.411064 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.411075 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.411083 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.411111 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.411122 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.29701229s)
	I1018 14:09:34.411143 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.411148 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.411161 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.411170 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.411178 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.411185 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.411186 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.411194 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.411202 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.411209 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.411237 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (8.162212378s)
	W1018 14:09:34.411260 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:34.411279 1760410 retry.go:31] will retry after 156.548971ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
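This failure was foreshadowed at 14:09:24.581, where ig-crd.yaml was copied over at just 14 bytes, far too small to hold a CustomResourceDefinition, so kubectl finds neither of the two mandatory top-level fields it complains about. For reference, a well-formed CRD manifest begins with at least the following header (illustrative, not the gadget CRD itself):

	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition

The retried command at 14:09:34.568 adds --force, but that flag changes conflict handling, not schema validation, so a truncated manifest would keep failing either way.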
	I1018 14:09:34.411277 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.411304 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.411320 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.411329 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.411355 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.411385 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.139958439s)
	I1018 14:09:34.411415 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.411426 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.411451 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.586135977s)
	I1018 14:09:34.411563 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.411581 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.411476 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.413776 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.413792 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.413803 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.413813 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.413821 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.413830 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.413837 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.413839 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.413857 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.413878 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.413884 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.413892 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.413899 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.413949 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.413963 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.413976 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.413984 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.413993 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.414003 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.414010 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.414017 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.414067 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.414253 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.414280 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.414288 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.414297 1760410 addons.go:479] Verifying addon metrics-server=true in "addons-891059"
	I1018 14:09:34.414448 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.414488 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.414509 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.414541 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.415992 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.416015 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.416023 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.416037 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.416049 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.416063 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.415991 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.416140 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.416167 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.416177 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.416185 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.416194 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.416025 1760410 addons.go:479] Verifying addon ingress=true in "addons-891059"
	I1018 14:09:34.416625 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.416635 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.413977 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.416602 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.416980 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.416993 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.418102 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.418150 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.418163 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.418177 1760410 addons.go:479] Verifying addon registry=true in "addons-891059"
	I1018 14:09:34.418831 1760410 out.go:179] * Verifying ingress addon...
	I1018 14:09:34.418835 1760410 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-891059 service yakd-dashboard -n yakd-dashboard
	
	I1018 14:09:34.420852 1760410 out.go:179] * Verifying registry addon...
	I1018 14:09:34.422521 1760410 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1018 14:09:34.423238 1760410 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1018 14:09:34.503158 1760410 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 14:09:34.503192 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:34.503257 1760410 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1018 14:09:34.503271 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
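Both waiters above poll the API server for pods matching a label selector until each pod reports Running. A minimal client-go sketch of that loop, assuming a reachable kubeconfig at the default location; minikube's kapi helper layers extra timeouts and logging on the same idea:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPods polls until every pod matching selector in ns is Running.
	func waitForPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // keep polling on transient errors or empty lists
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		err = waitForPods(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute)
		fmt.Println("wait result:", err)
	}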
	I1018 14:09:34.568542 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:34.621858 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.621880 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.622193 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.622248 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.622262 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	W1018 14:09:34.622394 1760410 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
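The storage-class warning above is a textbook optimistic-concurrency conflict: between reading the local-path StorageClass and writing it back, another actor updated the object, so the API server rejects the stale resourceVersion. The usual client-go remedy is to re-read and re-apply the mutation inside retry.RetryOnConflict; a sketch under the same kubeconfig assumption as above:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/retry"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.TODO()
		// Re-read, mutate, and update until the write stops being rejected
		// as a conflict; RetryOnConflict re-runs the closure on 409s.
		err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(ctx, "local-path", metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err
		})
		fmt.Println("update result:", err)
	}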
	I1018 14:09:34.659969 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.659996 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.660315 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.660316 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.660354 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.941419 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:34.942360 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:34.990391 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.549686758s)
	I1018 14:09:34.990429 1760410 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (6.927791238s)
	I1018 14:09:34.990461 1760410 api_server.go:72] duration metric: took 10.57479054s to wait for apiserver process to appear ...
	W1018 14:09:34.990458 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 14:09:34.990494 1760410 retry.go:31] will retry after 178.461593ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
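This second apply failure is an ordering race rather than a broken manifest: the VolumeSnapshotClass and the CRD defining its kind are submitted in one batch, and the API server has not finished registering the new snapshot.storage.k8s.io/v1 types when the class arrives, hence the hint to "ensure CRDs are installed first". The 178ms retry gives discovery time to catch up. Outside minikube, a common mitigation is to apply the CRDs separately and block on their Established condition, for example:

	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io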
	I1018 14:09:34.990467 1760410 api_server.go:88] waiting for apiserver healthz status ...
	I1018 14:09:34.990545 1760410 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I1018 14:09:35.010676 1760410 api_server.go:279] https://192.168.39.100:8443/healthz returned 200:
	ok
	I1018 14:09:35.013686 1760410 api_server.go:141] control plane version: v1.34.1
	I1018 14:09:35.013719 1760410 api_server.go:131] duration metric: took 23.188895ms to wait for apiserver health ...
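The healthz probe is a plain HTTPS GET against the apiserver, and any 200 response counts as healthy. A self-contained sketch follows; certificate verification is skipped only to keep the example short, since wiring in the cluster CA bundle is beside the point here:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// InsecureSkipVerify only because this sketch has no CA bundle wired
		// in; real callers should trust the cluster CA instead.
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.39.100:8443/healthz")
		if err != nil {
			fmt.Println("unhealthy:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("returned %d: %s\n", resp.StatusCode, body)
	}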
	I1018 14:09:35.013750 1760410 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 14:09:35.060072 1760410 system_pods.go:59] 16 kube-system pods found
	I1018 14:09:35.060119 1760410 system_pods.go:61] "amd-gpu-device-plugin-c5cbb" [64430541-160f-413b-b21e-6636047a8859] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 14:09:35.060127 1760410 system_pods.go:61] "coredns-66bc5c9577-9t6mk" [d2cf3593-0ffc-49aa-ab5d-1ecf71d259cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 14:09:35.060138 1760410 system_pods.go:61] "coredns-66bc5c9577-nf592" [e1dcbe4f-f240-4a2f-a4ff-686ee74288d6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 14:09:35.060145 1760410 system_pods.go:61] "etcd-addons-891059" [d809b325-765e-4e94-9832-03ad283377f1] Running
	I1018 14:09:35.060149 1760410 system_pods.go:61] "kube-apiserver-addons-891059" [edc4bec3-9171-4df8-a0e4-556ac2ece3e1] Running
	I1018 14:09:35.060152 1760410 system_pods.go:61] "kube-controller-manager-addons-891059" [03f45aa3-88da-45f0-9932-fa0a92d33e62] Running
	I1018 14:09:35.060157 1760410 system_pods.go:61] "kube-ingress-dns-minikube" [2d2be3a2-f8a7-4762-a4a6-aeea42df7e21] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 14:09:35.060160 1760410 system_pods.go:61] "kube-proxy-ckpzl" [a3ac992c-4401-40f5-93dd-7a525ec3b2a5] Running
	I1018 14:09:35.060163 1760410 system_pods.go:61] "kube-scheduler-addons-891059" [54facfd7-1a3c-4565-8ffb-d4ef204a0858] Running
	I1018 14:09:35.060168 1760410 system_pods.go:61] "metrics-server-85b7d694d7-zthlp" [23d1a687-8b62-4e3f-be5e-9664ae7f101e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 14:09:35.060178 1760410 system_pods.go:61] "nvidia-device-plugin-daemonset-5z8tb" [0e21578d-6373-41a1-aaa9-7c86d80f9c8c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 14:09:35.060186 1760410 system_pods.go:61] "registry-6b586f9694-z6m2x" [e32c82d5-bbaf-47cf-a6dd-4488d4e419e4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 14:09:35.060194 1760410 system_pods.go:61] "registry-creds-764b6fb674-sg8jp" [55d9e015-f26a-4270-8187-b8312c331504] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 14:09:35.060203 1760410 system_pods.go:61] "registry-proxy-tmmvd" [cb52b147-d27f-4a99-9ec8-ffd5f90861e4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 14:09:35.060209 1760410 system_pods.go:61] "snapshot-controller-7d9fbc56b8-b9tnq" [a028a732-94f8-46f5-8ade-adc72e44a92d] Pending
	I1018 14:09:35.060218 1760410 system_pods.go:61] "storage-provisioner" [a6f8bdeb-9db0-44f3-b3cb-8396901acaf5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 14:09:35.060229 1760410 system_pods.go:74] duration metric: took 46.469158ms to wait for pod list to return data ...
	I1018 14:09:35.060248 1760410 default_sa.go:34] waiting for default service account to be created ...
	I1018 14:09:35.104632 1760410 default_sa.go:45] found service account: "default"
	I1018 14:09:35.104663 1760410 default_sa.go:55] duration metric: took 44.40546ms for default service account to be created ...
	I1018 14:09:35.104677 1760410 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 14:09:35.169265 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 14:09:35.176957 1760410 system_pods.go:86] 17 kube-system pods found
	I1018 14:09:35.177007 1760410 system_pods.go:89] "amd-gpu-device-plugin-c5cbb" [64430541-160f-413b-b21e-6636047a8859] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 14:09:35.177019 1760410 system_pods.go:89] "coredns-66bc5c9577-9t6mk" [d2cf3593-0ffc-49aa-ab5d-1ecf71d259cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 14:09:35.177052 1760410 system_pods.go:89] "coredns-66bc5c9577-nf592" [e1dcbe4f-f240-4a2f-a4ff-686ee74288d6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 14:09:35.177068 1760410 system_pods.go:89] "etcd-addons-891059" [d809b325-765e-4e94-9832-03ad283377f1] Running
	I1018 14:09:35.177079 1760410 system_pods.go:89] "kube-apiserver-addons-891059" [edc4bec3-9171-4df8-a0e4-556ac2ece3e1] Running
	I1018 14:09:35.177087 1760410 system_pods.go:89] "kube-controller-manager-addons-891059" [03f45aa3-88da-45f0-9932-fa0a92d33e62] Running
	I1018 14:09:35.177100 1760410 system_pods.go:89] "kube-ingress-dns-minikube" [2d2be3a2-f8a7-4762-a4a6-aeea42df7e21] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 14:09:35.177106 1760410 system_pods.go:89] "kube-proxy-ckpzl" [a3ac992c-4401-40f5-93dd-7a525ec3b2a5] Running
	I1018 14:09:35.177117 1760410 system_pods.go:89] "kube-scheduler-addons-891059" [54facfd7-1a3c-4565-8ffb-d4ef204a0858] Running
	I1018 14:09:35.177125 1760410 system_pods.go:89] "metrics-server-85b7d694d7-zthlp" [23d1a687-8b62-4e3f-be5e-9664ae7f101e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 14:09:35.177134 1760410 system_pods.go:89] "nvidia-device-plugin-daemonset-5z8tb" [0e21578d-6373-41a1-aaa9-7c86d80f9c8c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 14:09:35.177145 1760410 system_pods.go:89] "registry-6b586f9694-z6m2x" [e32c82d5-bbaf-47cf-a6dd-4488d4e419e4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 14:09:35.177156 1760410 system_pods.go:89] "registry-creds-764b6fb674-sg8jp" [55d9e015-f26a-4270-8187-b8312c331504] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 14:09:35.177171 1760410 system_pods.go:89] "registry-proxy-tmmvd" [cb52b147-d27f-4a99-9ec8-ffd5f90861e4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 14:09:35.177180 1760410 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b9tnq" [a028a732-94f8-46f5-8ade-adc72e44a92d] Pending
	I1018 14:09:35.177187 1760410 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bzhfk" [f3e3fb2c-05b7-448d-bca6-3438d70868b1] Pending
	I1018 14:09:35.177198 1760410 system_pods.go:89] "storage-provisioner" [a6f8bdeb-9db0-44f3-b3cb-8396901acaf5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 14:09:35.177213 1760410 system_pods.go:126] duration metric: took 72.526149ms to wait for k8s-apps to be running ...
	I1018 14:09:35.177228 1760410 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 14:09:35.177303 1760410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
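
The kubelet probe above ("sudo systemctl is-active --quiet service kubelet") reports status through its exit code alone: with --quiet, systemctl prints nothing and exits 0 only when the unit is active, which is why the log records the command but no output. A minimal local sketch of the same check (minikube runs it over its ssh_runner session instead):

// A minimal sketch of the logged kubelet check, run locally rather than
// over SSH. --quiet suppresses all output, so only the exit code matters:
// a nil error from Run() means the unit is active.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Arguments mirror the logged invocation verbatim.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}
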
	I1018 14:09:35.445832 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:35.461317 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:35.939729 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:35.942319 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:36.445234 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.243330128s)
	I1018 14:09:36.445310 1760410 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.825399752s)
	I1018 14:09:36.445314 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:36.445449 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:36.445853 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:36.445924 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:36.445941 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:36.445953 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:36.445962 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:36.446272 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:36.446292 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:36.446304 1760410 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-891059"
	I1018 14:09:36.447257 1760410 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1018 14:09:36.448070 1760410 out.go:179] * Verifying csi-hostpath-driver addon...
	I1018 14:09:36.449546 1760410 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 14:09:36.450329 1760410 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1018 14:09:36.450870 1760410 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1018 14:09:36.450894 1760410 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1018 14:09:36.458277 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:36.471857 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:36.484451 1760410 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 14:09:36.484481 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
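
Each kapi.go:96 line in this section is one iteration of a poll loop: list the pods matching a label selector, keep waiting while any of them is still Pending, and stop once all report Running. A sketch of that loop with client-go; the clientset wiring and the 500ms/6m interval and timeout are assumptions, not taken from minikube:

// A sketch of the "waiting for pod" poll loop, assuming client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	selector := "kubernetes.io/minikube-addons=csi-hostpath-driver" // label from the log
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // transient API error: keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", p.Name, p.Status.Phase)
					return false, nil
				}
			}
			// Done only once at least one matching pod exists and all are Running.
			return len(pods.Items) > 0, nil
		})
	if err != nil {
		panic(err)
	}
}
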
	I1018 14:09:36.597464 1760410 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1018 14:09:36.597499 1760410 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1018 14:09:36.732996 1760410 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 14:09:36.733028 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1018 14:09:36.885741 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 14:09:36.948270 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:36.948391 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:36.960478 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:37.436446 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:37.439412 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:37.456938 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:37.927403 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:37.928102 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:37.956527 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:38.404132 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.835532164s)
	W1018 14:09:38.404196 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:38.404224 1760410 retry.go:31] will retry after 203.009637ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
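
This failure repeats identically for every retry below: kubectl's client-side validation rejects any YAML document that does not set both apiVersion and kind, and since each retry re-applies the same /etc/kubernetes/addons/ig-crd.yaml, no amount of backoff can succeed until the file itself changes. A small standalone check that reproduces what the validator is complaining about (gopkg.in/yaml.v3 assumed):

// A sketch that flags what kubectl validation rejects here: every document
// in a multi-document manifest must carry both apiVersion and kind.
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/etc/kubernetes/addons/ig-crd.yaml") // path from the log
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 0; ; i++ {
		var doc map[string]interface{}
		err := dec.Decode(&doc)
		if errors.Is(err, io.EOF) {
			break
		}
		if err != nil {
			panic(err)
		}
		if doc == nil { // empty document between --- separators
			continue
		}
		// kubectl reports exactly these two fields as "not set".
		if doc["apiVersion"] == nil || doc["kind"] == nil {
			fmt.Printf("document %d: apiVersion or kind not set\n", i)
		}
	}
}
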
	I1018 14:09:38.433864 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:38.434743 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:38.531382 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:38.607892 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:38.751077 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.58176118s)
	I1018 14:09:38.751130 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:38.751161 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:38.751178 1760410 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.573842033s)
	I1018 14:09:38.751219 1760410 system_svc.go:56] duration metric: took 3.573986856s WaitForService to wait for kubelet
	I1018 14:09:38.751238 1760410 kubeadm.go:586] duration metric: took 14.335564787s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 14:09:38.751274 1760410 node_conditions.go:102] verifying NodePressure condition ...
	I1018 14:09:38.751483 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:38.751506 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:38.751516 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:38.751529 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:38.751536 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:38.751791 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:38.751808 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:38.851019 1760410 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1018 14:09:38.851051 1760410 node_conditions.go:123] node cpu capacity is 2
	I1018 14:09:38.851069 1760410 node_conditions.go:105] duration metric: took 99.788234ms to run NodePressure ...
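
The NodePressure step above reads those figures (17734596Ki of ephemeral storage, 2 CPUs — the KVM guest's capacity) straight from each node's status. A sketch of the same read with client-go; the clientset wiring is assumed:

// A sketch of reading node capacity, as the node_conditions lines do.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		c := n.Status.Capacity
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
			n.Name, c.StorageEphemeral().String(), c.Cpu().String())
	}
}
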
	I1018 14:09:38.851086 1760410 start.go:241] waiting for startup goroutines ...
	I1018 14:09:38.908065 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.022268979s)
	I1018 14:09:38.908143 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:38.908165 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:38.908474 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:38.908500 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:38.908510 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:38.908518 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:38.908801 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:38.908819 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:38.908845 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:38.909928 1760410 addons.go:479] Verifying addon gcp-auth=true in "addons-891059"
	I1018 14:09:38.911794 1760410 out.go:179] * Verifying gcp-auth addon...
	I1018 14:09:38.913871 1760410 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1018 14:09:38.969859 1760410 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1018 14:09:38.969881 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:38.979126 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:38.979302 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:38.999385 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:39.427914 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:39.428338 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:39.431173 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:39.465614 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:39.930950 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:39.936675 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:39.942841 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:39.965308 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:40.421639 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:40.429893 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:40.429965 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:40.457177 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:40.676324 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.068378617s)
	W1018 14:09:40.676402 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:40.676434 1760410 retry.go:31] will retry after 741.361151ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 14:09:40.925104 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:40.933643 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:41.024046 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:41.027134 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:41.418785 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:41.422791 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:41.437450 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:41.437815 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:41.458160 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:41.920933 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:41.931994 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:41.932787 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:41.954074 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:42.420874 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:42.427884 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:42.432996 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:42.455566 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:42.935811 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:42.935897 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:42.936364 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:42.948192 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.529349883s)
	W1018 14:09:42.948266 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:42.948305 1760410 retry.go:31] will retry after 603.252738ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 14:09:42.961547 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:43.421694 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:43.425963 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:43.432125 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:43.454728 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:43.552443 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:43.920168 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:43.926196 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:43.932562 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:43.954780 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:44.418856 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:44.434761 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:44.434815 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:44.485100 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:44.719803 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.167302475s)
	W1018 14:09:44.719876 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:44.719906 1760410 retry.go:31] will retry after 756.582939ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 14:09:44.919572 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:44.929974 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:44.930622 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:44.954972 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:45.419454 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:45.431537 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:45.435706 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:45.458249 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:45.477327 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:45.921959 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:45.932928 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:45.933443 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:45.960253 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:46.424197 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:46.434428 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:46.437611 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:46.457951 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:46.721183 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.243789601s)
	W1018 14:09:46.721253 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:46.721284 1760410 retry.go:31] will retry after 1.22541109s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 14:09:46.920063 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:46.927281 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:46.930483 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:46.954658 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:47.422281 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:47.427164 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:47.431758 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:47.456565 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:47.926249 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:47.939833 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:47.940075 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:47.946922 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:47.966036 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:48.420073 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:48.432202 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:48.434126 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:48.457282 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:48.920393 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:48.930362 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:48.932858 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:48.957018 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:49.201980 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.255004165s)
	W1018 14:09:49.202036 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:49.202059 1760410 retry.go:31] will retry after 2.58897953s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 14:09:49.420911 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:49.428333 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:49.430869 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:49.457131 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:50.368228 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:50.376847 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:50.376847 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:50.377051 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:50.476106 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:50.476372 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:50.479024 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:50.479966 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:50.920534 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:50.935331 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:50.938361 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:50.961186 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:51.424118 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:51.430809 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:51.432102 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:51.455044 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:51.791362 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:51.922858 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:51.934999 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:51.935987 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:51.958913 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:52.642039 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:52.642370 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:52.644501 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:52.644727 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:52.918752 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:52.926588 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:52.930871 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:52.956219 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:53.183831 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.392411457s)
	W1018 14:09:53.183895 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:53.183924 1760410 retry.go:31] will retry after 4.131889795s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 14:09:53.417891 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:53.426911 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:53.428495 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:53.454047 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:53.919491 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:53.929299 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:53.929427 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:53.958043 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:54.418456 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:54.427470 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:54.427657 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:54.456313 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:54.919925 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:54.927822 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:54.928397 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:54.955119 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:55.419222 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:55.429271 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:55.430752 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:55.455541 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:55.918460 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:55.928654 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:55.930176 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:55.958687 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:56.417289 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:56.426666 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:56.426937 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:56.456516 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:56.921455 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:56.931545 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:56.932200 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:56.957601 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:57.316649 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:57.422032 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:57.435023 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:57.437778 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:57.455440 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:57.921161 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:57.929313 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:57.929394 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:57.955970 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:58.423288 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:58.439731 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:58.440095 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:58.786495 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:58.919590 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:58.930253 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:58.932272 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:58.957912 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:58.980642 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.663942768s)
	W1018 14:09:58.980696 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:58.980722 1760410 retry.go:31] will retry after 6.037644719s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
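
The delays chosen across these retry.go:31 lines (203ms, 741ms, 603ms, 756ms, 1.23s, 2.59s, 6.04s) grow roughly exponentially with jitter, which is why consecutive values occasionally shrink. A minimal sketch of that retry pattern; this is illustrative, not minikube's actual retry package:

// A minimal sketch of jittered exponential backoff: double a base delay each
// attempt, and sleep a random duration between 0.5x and 1.5x of that base so
// concurrent retries do not synchronize.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	// The callback always fails here, mirroring the re-applied invalid manifest.
	_ = retryWithBackoff(7, 200*time.Millisecond, func() error {
		return fmt.Errorf("error validating ig-crd.yaml: apiVersion not set, kind not set")
	})
}
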
	I1018 14:09:59.421401 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:59.428863 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:59.429465 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:59.458445 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:59.918316 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:59.928753 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:59.928856 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:59.955245 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:00.418136 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:00.427048 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:00.428214 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:00.457368 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:00.919392 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:00.929649 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:00.931313 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:00.959561 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:01.420084 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:01.426435 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:01.428419 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:01.463886 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:01.918664 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:01.927921 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:01.927979 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:01.954513 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:02.417929 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:02.426037 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:02.428261 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:02.455407 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:02.922146 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:02.928949 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:02.933375 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:02.956535 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:03.420697 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:03.429208 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:03.432897 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:03.459039 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:03.918554 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:03.926959 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:03.927105 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:03.955657 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:04.418489 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:04.430359 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:04.430521 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:04.456644 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:04.918502 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:04.930599 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:04.930923 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:04.956737 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:05.018763 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:10:05.417681 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:05.428004 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:05.429827 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:05.456781 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:05.917569 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:05.926923 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:05.928124 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:05.957076 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:06.036566 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.017738492s)
	W1018 14:10:06.036634 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:10:06.036662 1760410 retry.go:31] will retry after 12.004802236s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
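Note the growing delay between attempts (6.04s after the first failure, 12.00s after this one): the retry.go messages show minikube backing off between applies rather than retrying immediately. A rough shell sketch of that apply-then-back-off loop, with hand-picked fixed delays standing in for the jittered backoff visible in the log:

	# Illustrative retry-with-backoff loop; the delays are hand-picked, not minikube's
	for delay in 6 12 15 15; do
	    if sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply --force \
	        -f /etc/kubernetes/addons/ig-crd.yaml \
	        -f /etc/kubernetes/addons/ig-deployment.yaml; then
	        break              # apply succeeded, stop retrying
	    fi
	    sleep "$delay"         # back off before the next attempt
	done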
	I1018 14:10:06.419404 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:06.429963 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:06.430297 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:06.457600 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:06.919260 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:06.929676 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:06.929775 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:07.155631 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:07.418580 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:07.427122 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:07.428776 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:07.457310 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:07.922270 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:07.926818 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:07.929313 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:07.956530 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:08.418802 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:08.429772 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:08.430398 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:08.456743 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:08.919063 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:08.930278 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:08.931169 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:08.954708 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:09.424687 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:09.432292 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:09.435514 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:09.460217 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:09.923294 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:09.930199 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:09.931023 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:09.955035 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:10.419846 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:10.426749 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:10.429140 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:10.456969 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:10.953436 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:10.956917 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:10.957054 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:10.957495 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:11.418736 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:11.426419 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:11.430935 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:11.455617 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:11.918928 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:11.927115 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:11.931414 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:11.960289 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:12.418970 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:12.430735 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:12.433659 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:12.456647 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:12.921054 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:12.928629 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:12.928668 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:12.956226 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:13.420386 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:13.427464 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:13.429090 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:13.455488 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:13.918328 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:13.927700 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:13.928318 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:13.954810 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:14.419754 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:14.425924 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:14.427917 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:14.455974 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:14.925112 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:14.929625 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:14.933370 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:14.957078 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:15.418580 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:15.428235 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:15.429169 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:15.457022 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:15.919800 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:15.936816 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:15.937017 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:15.957268 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:16.417946 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:16.427385 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:16.431794 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:16.456614 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:16.919525 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:16.926577 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:16.926658 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:16.954174 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:17.421789 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:17.426437 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:17.431339 1760410 kapi.go:107] duration metric: took 43.008095172s to wait for kubernetes.io/minikube-addons=registry ...
	I1018 14:10:17.457873 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:17.918594 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:17.929987 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:17.961960 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:18.042188 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:10:18.422928 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:18.427500 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:18.456271 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:18.919452 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:18.930289 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:18.956388 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:19.361633 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.319335622s)
	W1018 14:10:19.361689 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:10:19.361728 1760410 retry.go:31] will retry after 15.164014777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:10:19.422771 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:19.438239 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:19.456621 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:19.921757 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:19.928298 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:19.956842 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:20.420260 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:20.427508 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:20.458936 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:20.918928 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:20.927378 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:20.955188 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:21.420104 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:21.426947 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:21.524486 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:21.918327 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:21.927194 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:21.955524 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:22.423531 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:22.426633 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:22.454711 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:22.921113 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:22.928945 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:22.954404 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:23.420637 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:23.430677 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:23.459231 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:23.919372 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:23.928323 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:23.958731 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:24.420036 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:24.427298 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:24.456668 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:24.919003 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:24.927657 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:24.957888 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:25.421338 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:25.427501 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:25.455612 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:25.918199 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:25.927869 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:25.958203 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:26.419024 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:26.428832 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:26.456514 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:26.918247 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:26.928171 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:26.956494 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:27.418446 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:27.430922 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:27.460225 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:27.934863 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:27.935267 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:27.956304 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:28.418276 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:28.426282 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:28.455657 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:28.921058 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:28.928216 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:28.957699 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:29.423964 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:29.429784 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:29.459912 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:29.919968 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:29.926486 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:30.021594 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:30.431798 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:30.435432 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:30.456454 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:30.930069 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:30.943105 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:30.955957 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:31.429432 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:31.438231 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:31.455431 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:31.921095 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:31.931309 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:31.956251 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:32.420152 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:32.428240 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:32.458714 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:32.922542 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:32.930043 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:32.957260 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:33.419500 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:33.428933 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:33.455363 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:33.923146 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:33.929585 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:33.958835 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:34.420137 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:34.426760 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:34.457114 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:34.526904 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:10:34.919159 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:34.928439 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:34.955153 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:35.418928 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:35.426233 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:35.458485 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:35.764870 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.237905947s)
	W1018 14:10:35.764934 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:10:35.764957 1760410 retry.go:31] will retry after 14.798475806s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:10:35.919540 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:35.928534 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:35.955008 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:36.450125 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:36.453729 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:36.536855 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:36.917765 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:36.925569 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:36.955287 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:37.419773 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:37.427166 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:37.456318 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:37.919552 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:37.927629 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:38.025256 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:38.424973 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:38.428550 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:38.453898 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:38.919099 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:38.926293 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:38.955682 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:39.418953 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:39.430007 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:39.459225 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:39.920652 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:39.929231 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:39.954710 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:40.421937 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:40.429412 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:40.480118 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:40.920635 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:40.929091 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:40.956998 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:41.426085 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:41.427988 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:41.459105 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:41.918797 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:41.926487 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:41.955036 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:42.420125 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:42.428890 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:42.454689 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:42.919029 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:42.927753 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:42.954419 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:43.422025 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:43.426830 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:43.457376 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:43.917234 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:43.930520 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:43.956616 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:44.419241 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:44.428799 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:44.456787 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:44.918484 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:44.928332 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:44.961125 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:45.421688 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:45.427032 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:45.457168 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:45.919022 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:45.927029 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:45.959091 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:46.418637 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:46.429220 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:46.455413 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:46.919149 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:46.926519 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:46.956560 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:47.419157 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:47.427737 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:47.455569 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:47.918673 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:47.926052 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:47.956842 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:48.420322 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:48.430745 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:48.456105 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:48.922457 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:48.928328 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:48.956428 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:49.434222 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:49.437527 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:49.461279 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:49.920966 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:49.929362 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:49.956797 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:50.418327 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:50.430238 1760410 kapi.go:107] duration metric: took 1m16.007712358s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1018 14:10:50.456335 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:50.564457 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:10:50.917217 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:50.958103 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:51.421689 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:51.455392 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:51.920286 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:51.942284 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.377769111s)
	W1018 14:10:51.942338 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:10:51.942424 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:10:51.942439 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:10:51.942850 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:10:51.942873 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:10:51.942875 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:10:51.942891 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:10:51.942902 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:10:51.943167 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:10:51.943186 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	W1018 14:10:51.943290 1760410 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1018 14:10:51.956095 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:52.418797 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:52.455097 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:52.918142 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:52.955842 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:53.417788 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:53.454466 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:53.928372 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:53.956892 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:54.421372 1760410 kapi.go:107] duration metric: took 1m15.507497357s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1018 14:10:54.422977 1760410 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-891059 cluster.
	I1018 14:10:54.424170 1760410 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1018 14:10:54.425362 1760410 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
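	The three advisory lines above refer to a plain pod label. A minimal sketch of opting a pod out of credential mounting, using client-go's typed structs; the label key is the one named in the log (and visible on the ingress-nginx controller sandbox later in this dump), while the pod name and image here are illustrative:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// skipGCPAuthPod builds a pod the gcp-auth webhook will leave alone:
// the label key matches the one mentioned in the minikube output above.
func skipGCPAuthPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "no-creds", // illustrative name
			Namespace: "default",
			Labels: map[string]string{
				"gcp-auth-skip-secret": "true",
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "gcr.io/k8s-minikube/busybox", // image used elsewhere in this report
			}},
		},
	}
}

func main() { _ = skipGCPAuthPod() }
```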
	I1018 14:10:54.455256 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:54.954565 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:55.455801 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:55.954326 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:56.455155 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:56.954954 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:57.455480 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:57.957998 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:58.454831 1760410 kapi.go:107] duration metric: took 1m22.004497442s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1018 14:10:58.456573 1760410 out.go:179] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, storage-provisioner, cloud-spanner, metrics-server, ingress-dns, registry-creds, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1018 14:10:58.457854 1760410 addons.go:514] duration metric: took 1m34.042106278s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin storage-provisioner cloud-spanner metrics-server ingress-dns registry-creds yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1018 14:10:58.457949 1760410 start.go:246] waiting for cluster config update ...
	I1018 14:10:58.457975 1760410 start.go:255] writing updated cluster config ...
	I1018 14:10:58.458280 1760410 ssh_runner.go:195] Run: rm -f paused
	I1018 14:10:58.466229 1760410 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 14:10:58.470432 1760410 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9t6mk" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:58.477134 1760410 pod_ready.go:94] pod "coredns-66bc5c9577-9t6mk" is "Ready"
	I1018 14:10:58.477163 1760410 pod_ready.go:86] duration metric: took 6.703976ms for pod "coredns-66bc5c9577-9t6mk" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:58.479169 1760410 pod_ready.go:83] waiting for pod "etcd-addons-891059" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:58.489364 1760410 pod_ready.go:94] pod "etcd-addons-891059" is "Ready"
	I1018 14:10:58.489404 1760410 pod_ready.go:86] duration metric: took 10.207192ms for pod "etcd-addons-891059" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:58.491622 1760410 pod_ready.go:83] waiting for pod "kube-apiserver-addons-891059" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:58.497381 1760410 pod_ready.go:94] pod "kube-apiserver-addons-891059" is "Ready"
	I1018 14:10:58.497406 1760410 pod_ready.go:86] duration metric: took 5.754148ms for pod "kube-apiserver-addons-891059" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:58.499963 1760410 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-891059" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:58.870880 1760410 pod_ready.go:94] pod "kube-controller-manager-addons-891059" is "Ready"
	I1018 14:10:58.870932 1760410 pod_ready.go:86] duration metric: took 370.945889ms for pod "kube-controller-manager-addons-891059" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:59.070811 1760410 pod_ready.go:83] waiting for pod "kube-proxy-ckpzl" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:59.471322 1760410 pod_ready.go:94] pod "kube-proxy-ckpzl" is "Ready"
	I1018 14:10:59.471383 1760410 pod_ready.go:86] duration metric: took 400.536721ms for pod "kube-proxy-ckpzl" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:59.672128 1760410 pod_ready.go:83] waiting for pod "kube-scheduler-addons-891059" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:11:00.071253 1760410 pod_ready.go:94] pod "kube-scheduler-addons-891059" is "Ready"
	I1018 14:11:00.071288 1760410 pod_ready.go:86] duration metric: took 399.125586ms for pod "kube-scheduler-addons-891059" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:11:00.071306 1760410 pod_ready.go:40] duration metric: took 1.60503304s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 14:11:00.118648 1760410 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 14:11:00.120494 1760410 out.go:179] * Done! kubectl is now configured to use "addons-891059" cluster and "default" namespace by default
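	The pod_ready lines above poll each kube-system pod by label until its PodReady condition reports True. A rough equivalent with client-go, shown only as a sketch: the helper name and polling interval are assumptions, not minikube's actual pod_ready.go.

```go
package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodsReady polls pods matching selector in ns until every one has a
// PodReady condition of True, or the timeout expires. PollImmediate is the
// classic helper; newer client-go prefers wait.PollUntilContextTimeout.
func waitPodsReady(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, nil // treat API errors as transient: keep polling
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				return false, nil
			}
		}
		return len(pods.Items) > 0, nil
	})
}

// main omitted: wiring a real clientset needs kubeconfig handling beyond this sketch.
func main() {}
```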
	
	
	==> CRI-O <==
	Oct 18 14:17:28 addons-891059 crio[822]: time="2025-10-18 14:17:28.452844913Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9437d37f-0b0f-413b-9f11-76667050496e name=/runtime.v1.RuntimeService/Version
	Oct 18 14:17:28 addons-891059 crio[822]: time="2025-10-18 14:17:28.454153939Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=548ed4d4-adf2-4898-8ead-0296f5e28f09 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:17:28 addons-891059 crio[822]: time="2025-10-18 14:17:28.455334565Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760797048455309783,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:520517,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=548ed4d4-adf2-4898-8ead-0296f5e28f09 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:17:28 addons-891059 crio[822]: time="2025-10-18 14:17:28.456331696Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=798985cc-983b-4298-81b3-c648485210cc name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:17:28 addons-891059 crio[822]: time="2025-10-18 14:17:28.456393530Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=798985cc-983b-4298-81b3-c648485210cc name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:17:28 addons-891059 crio[822]: time="2025-10-18 14:17:28.457029339Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4019b2f5a82ebc7fb6dabae9a874d699665a5d8c69de73eb709ca4a501ac015,PodSandboxId:871fa03a650614957b7d3d2014f39478cf8cb5cd45eb550c6abd6222b43732a9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760796662606988160,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 75ccff45-9202-4152-b90e-8a5a6d306c7d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5e462bcd2b5f465fe95346688533db6801a9c93215937bfbcf4abffe97f6c0,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1760796657878096678,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e429add87fb7915cacc16256e7047f4f649d645dd6350add56e90ceda89be5cb,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1760796656108432125,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c154e6ad0036f8e08a29b6d27bd296913987ed0f4235dd603093178c177e86b,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1760796654288737227,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e42c0ad16a76575cdf86955e752de6fc61fbdffec61b610745b88dc300290e,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1760796650670836429,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:
map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ce2976bee33494c9148720fc6f41dafc7c06699c436b9f7352992e408fc1ce,PodSandboxId:2f9eb1464924400027510bd40640a85e472321a499aaff7e545d8f90a3a2b454,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760796649028158931,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-con
troller-675c5ddd98-bphwz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5355fea1-7cc1-4587-853e-61aaaa6f569e,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9830a2003573c4745aeef463de8c6f60ef95ad1ea86413fbba89a04f8d287e29,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1760796641350506570,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b41579872800aaa54c544cb3ac01bd4bfbdb75ed8bfb2068b63a461effcb494,PodSandboxId:d23e703cbfeb7f985a5ee31bbb8e9a0beaaca929b2a9d12c66bc036a83f06e54,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1760796639902169014,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efc965f-2bb9-4589-8896-270849ff244b,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b6304f138a157686248517d9a4334e9f7e0a04eb4d75d3e8242c7d66099747,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name
:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1760796637960180053,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3781d3641f70c2afdd9e7cf33046996dcefa7ceeb31eaeb6735fe958ea81fbdf,PodSa
ndboxId:2d23bcaba041603a7033e5364863b52ee33056bf513c91b93cbd051dc4ee50fb,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1760796636160087491,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-bzhfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3e3fb2c-05b7-448d-bca6-3438d70868b1,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:9bb6d569a2a3f2ef99bf632b0e17f74e8f99944756e5338f36177afc9784250e,PodSandboxId:7a44187aa2259b4391883c3f4e9b9dfefc7c60831b7bfc9273715b7a8b6675b5,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1760796636024422683,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-b9tnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a028a732-94f8-46f5-8ade-adc72e44a92d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6267021fe47465dfb0a972ca3ac1853819fcb8ec9c4af79da3515676f56c70d,PodSandboxId:7483a2b2bce44deaa3b7126ad65266f9ccb9eb59517cc399fde2646bdce00e31,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760796634343510547,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lz2l5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: edbb1e3e-09f2-4958-b943-de86e541c2ab,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e786527308546addc508c7f9fde815f3dbf888dbbd28417a6fda88b88fa8ab,PodSandboxId:19bb29e5d6915f98e1c622bd12dfd02a46541ba9d2922196d95c45d1eef03591,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1760796634154278160,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66fa96af-5215-410d-899c-8ee3de6c2691,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:405281ec9edfa02e6ef1722dec6adc497496544ed9e116c4827e07faa66e42b3,PodSandboxId:784fb9851d0e370b86d85cb15f009b0ada6ea2b7f21e505158415537390f7d3a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760796631912253285,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nbrm2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e48f1e46-67fb-4c71-bc01-b2f3743345f0,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:751b2df6a5bf4c3261a679f6e961086b9a7e8a0d308b47ba5a823ed41d50ff7c,PodSandboxId:e7adc46dd97a6e6351f075aad05529d7968ddcfdb815b441bff765545717c999,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760796621649083356,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-bz8k2,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 32f0a88f-aea2-4621-a5b1-df5a3fb86a2b,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{
\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3faa5d947b9ededdb0f9530cfb2606f9d20f027050a247e368207048d7856361,PodSandboxId:04626452678ece1669cf1b64aa42ec4e38880fec5bfbbb2efb6abcab66a2eba0,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760796611084064989,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2be3a2-f8a7-4762-a
4a6-aeea42df7e21,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da75007bac0f47603bb3540fd8ae444427639a840b26793c26a279445acc6504,PodSandboxId:bf130a85fe68d5cdda719544aa9afd112627aeb7acb1df2c62daeedf486112a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760796577983458040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6f8bdeb-9db0-44f3-b3cb-8396901acaf5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90350cf8ae05058e381c6f06dfaaa1b66c33001b294c94602cbb4601d22e5bc2,PodSandboxId:b439dd6e51abd6ee7156af98c543df3bcd516cd309de6b0b6fd934ae60d4579a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760796574525913819,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.p
od.name: amd-gpu-device-plugin-c5cbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64430541-160f-413b-b21e-6636047a8859,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b099b5b37807cb6ddae926ed2ce7fd3b3113ee1520cb817da8f25923c16c925,PodSandboxId:ba30da275bea105c47caa89fd0d4a924e96bd43b200434b972d0f1686c5cdb46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760796569075663973,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-
9t6mk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2cf3593-0ffc-49aa-ab5d-1ecf71d259cc,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97e1670c81585e6415c369e52af3deebb586e548711c359ac4fe22d13bfbf881,PodSandboxId:8fb6c60415fdaa40da442b8d93572f59350e86e5027e05f1e616ddc3e66d1895,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840e
c8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760796567868668763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ckpzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3ac992c-4401-40f5-93dd-7a525ec3b2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f010fdc156cb398c84f19945fc8b9f186ef23cb554bce047cf0bdadc63ef552,PodSandboxId:bfa6fdc1baf4d2d9eaa5d56358672ee6314ea527df88bc7c5cfbb6d68599a772,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac01
15,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760796553601510681,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4360d09804819a4ab0d1ffed7423947,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873a633e0ebfdc97218e103cd398dde377449c146a2b3d8affa3222d72e07fad,PodSandboxId:4b35987ede0428e0950b004d1104001ead21d6b6989238185c2fb74d3cf3bf44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Atte
mpt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760796553612924961,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1348b107c675acfd26c3d687c91d60c5,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50cc3d2477595030b199dee8a2c8a4cb8f2f508dbbe7bdf89f535de0d3d1d6b6,PodSan
dboxId:b783fc0f686a0773f409244090fb0347fd53adfbe3110712527fc3d39b81e149,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760796553577778017,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5086595138b36f6eb8ac54e83c6bc182,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:550e8ca214589028236bc3f3e98efbed492d3f84addbacedfb6929bee8541bab,PodSandboxId:c8fbc229d4f5f4b227bfc321c455f9928cc82e2099fb0746d33c7d9c893295f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760796553532990421,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97082571db3e60e44c3d60e99a384436,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=798985cc-983b-4298-81b3-c648485210cc name=/runtime.v1.RuntimeService/ListContainers
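	The Request/Response pairs in this CRI-O section are CRI gRPC calls arriving over crio's unix socket (the ListPodSandbox entry below is the analogous sandbox-level call). A minimal client that issues the same ListContainers call, assuming the k8s.io/cri-api v1 package and the default /var/run/crio/crio.sock endpoint; import paths and deprecations vary across releases, and `crictl ps -a` wraps this same RPC in practice:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Default CRI-O socket; the same endpoint the kubelet (and crictl) use.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter reproduces the "No filters were applied" path in the log.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		fmt.Println(c.Id, c.Metadata.Name, c.State)
	}
}
```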
	Oct 18 14:17:28 addons-891059 crio[822]: time="2025-10-18 14:17:28.476891863Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=63bba70e-74d3-4163-afcf-2168aeafb133 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 18 14:17:28 addons-891059 crio[822]: time="2025-10-18 14:17:28.477429087Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:36a386db9a11023440d7488d3faa409d9d21b5fcdf11924a08463ea44dd49593,Metadata:&PodSandboxMetadata{Name:nginx,Uid:3922f28b-1c3b-4a38-b461-c5f57823b438,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760796697443273172,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3922f28b-1c3b-4a38-b461-c5f57823b438,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-18T14:11:37.124477512Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a7c4673213ca0f4a45f4d9bc223588e57eb596c978d6c6d5791db3392ec7e625,Metadata:&PodSandboxMetadata{Name:task-pv-pod,Uid:95d229e3-8666-49b8-b2d2-2e34ed8f3aab,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760796687384289463,Labels:map[string]string{app: task-pv
-pod,io.kubernetes.container.name: POD,io.kubernetes.pod.name: task-pv-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 95d229e3-8666-49b8-b2d2-2e34ed8f3aab,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-18T14:11:27.060104843Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:818688915cb91ca58430f6738f1eefc109893abeeab40864596fbcf61e067383,Metadata:&PodSandboxMetadata{Name:test-local-path,Uid:d6bcb3d3-06c5-4ec8-8496-cf302660e01d,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760796683670011917,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d6bcb3d3-06c5-4ec8-8496-cf302660e01d,run: test-local-path,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"run\":\"test-local-path\"},\"name\":\"test-local-path\",\"namespace\":\"
default\"},\"spec\":{\"containers\":[{\"command\":[\"sh\",\"-c\",\"echo 'local-path-provisioner' \\u003e /test/file1\"],\"image\":\"busybox:stable\",\"name\":\"busybox\",\"volumeMounts\":[{\"mountPath\":\"/test\",\"name\":\"data\"}]}],\"restartPolicy\":\"OnFailure\",\"volumes\":[{\"name\":\"data\",\"persistentVolumeClaim\":{\"claimName\":\"test-pvc\"}}]}}\n,kubernetes.io/config.seen: 2025-10-18T14:11:23.048219396Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:871fa03a650614957b7d3d2014f39478cf8cb5cd45eb550c6abd6222b43732a9,Metadata:&PodSandboxMetadata{Name:busybox,Uid:75ccff45-9202-4152-b90e-8a5a6d306c7d,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760796661100376550,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 75ccff45-9202-4152-b90e-8a5a6d306c7d,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-18T14:11:00.780042067Z,kubern
etes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2f9eb1464924400027510bd40640a85e472321a499aaff7e545d8f90a3a2b454,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-675c5ddd98-bphwz,Uid:5355fea1-7cc1-4587-853e-61aaaa6f569e,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760796638265262385,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-bphwz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5355fea1-7cc1-4587-853e-61aaaa6f569e,pod-template-hash: 675c5ddd98,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-18T14:09:34.045816466Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d23e703cbfeb7f985a5ee31bbb8e9a0beaaca929b2a9d12c66bc036a83f06e54,Metadata:&PodSandboxMetadata{Name:csi-hostpath-resizer-0,Uid:4efc9
65f-2bb9-4589-8896-270849ff244b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760796577721901564,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,app.kubernetes.io/name: csi-hostpath-resizer,apps.kubernetes.io/pod-index: 0,controller-revision-hash: csi-hostpath-resizer-5f4978ffc6,io.kubernetes.container.name: POD,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efc965f-2bb9-4589-8896-270849ff244b,kubernetes.io/minikube-addons: csi-hostpath-driver,statefulset.kubernetes.io/pod-name: csi-hostpath-resizer-0,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-18T14:09:36.441308788Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&PodSandboxMetadata{Name:csi-hostpathplugin-65z6z,Uid:ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760796577691248960,Labels
:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,app.kubernetes.io/component: plugin,app.kubernetes.io/instance: hostpath.csi.k8s.io,app.kubernetes.io/name: csi-hostpathplugin,app.kubernetes.io/part-of: csi-driver-host-path,controller-revision-hash: bfd669d76,io.kubernetes.container.name: POD,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,kubernetes.io/minikube-addons: csi-hostpath-driver,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-18T14:09:36.200231238Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:19bb29e5d6915f98e1c622bd12dfd02a46541ba9d2922196d95c45d1eef03591,Metadata:&PodSandboxMetadata{Name:csi-hostpath-attacher-0,Uid:66fa96af-5215-410d-899c-8ee3de6c2691,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760796576743510703,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,app.kubernetes.
io/name: csi-hostpath-attacher,apps.kubernetes.io/pod-index: 0,controller-revision-hash: csi-hostpath-attacher-576bccf57,io.kubernetes.container.name: POD,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66fa96af-5215-410d-899c-8ee3de6c2691,kubernetes.io/minikube-addons: csi-hostpath-driver,statefulset.kubernetes.io/pod-name: csi-hostpath-attacher-0,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-18T14:09:36.060684759Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2d23bcaba041603a7033e5364863b52ee33056bf513c91b93cbd051dc4ee50fb,Metadata:&PodSandboxMetadata{Name:snapshot-controller-7d9fbc56b8-bzhfk,Uid:f3e3fb2c-05b7-448d-bca6-3438d70868b1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760796575633499514,Labels:map[string]string{app: snapshot-controller,io.kubernetes.container.name: POD,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-bzhfk,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: f3e3fb2c-05b7-448d-bca6-3438d70868b1,pod-template-hash: 7d9fbc56b8,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-18T14:09:35.144501167Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7a44187aa2259b4391883c3f4e9b9dfefc7c60831b7bfc9273715b7a8b6675b5,Metadata:&PodSandboxMetadata{Name:snapshot-controller-7d9fbc56b8-b9tnq,Uid:a028a732-94f8-46f5-8ade-adc72e44a92d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760796575570509292,Labels:map[string]string{app: snapshot-controller,io.kubernetes.container.name: POD,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-b9tnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a028a732-94f8-46f5-8ade-adc72e44a92d,pod-template-hash: 7d9fbc56b8,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-18T14:09:35.078158458Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7483a2b2bce44deaa3b7126ad65266f9ccb9eb59517cc399fde2646bdce00e31,Metadata:&PodSandbox
Metadata{Name:ingress-nginx-admission-patch-lz2l5,Uid:edbb1e3e-09f2-4958-b943-de86e541c2ab,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1760796575059184114,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: 7bbede83-bf52-435b-ab08-95fe87978678,batch.kubernetes.io/job-name: ingress-nginx-admission-patch,controller-uid: 7bbede83-bf52-435b-ab08-95fe87978678,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-lz2l5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: edbb1e3e-09f2-4958-b943-de86e541c2ab,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-18T14:09:34.279766095Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:784fb9851d0e370b86d85cb15f009b0ada6ea2b7f21e505158415537390f7d3a,Metadata:&PodSandboxMetadata{Name:ingress-ngin
x-admission-create-nbrm2,Uid:e48f1e46-67fb-4c71-bc01-b2f3743345f0,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1760796574970423092,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: 62c43cac-893a-4e5e-b80a-5bbc5476490c,batch.kubernetes.io/job-name: ingress-nginx-admission-create,controller-uid: 62c43cac-893a-4e5e-b80a-5bbc5476490c,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-nbrm2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e48f1e46-67fb-4c71-bc01-b2f3743345f0,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-18T14:09:34.199828892Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e7adc46dd97a6e6351f075aad05529d7968ddcfdb815b441bff765545717c999,Metadata:&PodSandboxMetadata{Name:gadget-bz8k2,Uid:32f0a88f-aea2-462
1-a5b1-df5a3fb86a2b,Namespace:gadget,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760796574212990572,Labels:map[string]string{controller-revision-hash: d797fcb64,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gadget-bz8k2,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 32f0a88f-aea2-4621-a5b1-df5a3fb86a2b,k8s-app: gadget,pod-template-generation: 1,},Annotations:map[string]string{container.apparmor.security.beta.kubernetes.io/gadget: unconfined,kubernetes.io/config.seen: 2025-10-18T14:09:33.500047595Z,kubernetes.io/config.source: api,prometheus.io/path: /metrics,prometheus.io/port: 2223,prometheus.io/scrape: true,},RuntimeHandler:,},&PodSandbox{Id:bf130a85fe68d5cdda719544aa9afd112627aeb7acb1df2c62daeedf486112a3,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:a6f8bdeb-9db0-44f3-b3cb-8396901acaf5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760796574108292591,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner
,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6f8bdeb-9db0-44f3-b3cb-8396901acaf5,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-10-18T14:09:31.733897050Z,kubernetes.io/config.source: api,},Runtime
Handler:,},&PodSandbox{Id:04626452678ece1669cf1b64aa42ec4e38880fec5bfbbb2efb6abcab66a2eba0,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:2d2be3a2-f8a7-4762-a4a6-aeea42df7e21,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760796571015091426,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2be3a2-f8a7-4762-a4a6-aeea42df7e21,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\
"}}}],\"image\":\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"hostPort\":53,\"protocol\":\"UDP\"}],\"volumeMounts\":[{\"mountPath\":\"/config\",\"name\":\"minikube-ingress-dns-config-volume\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\",\"volumes\":[{\"configMap\":{\"name\":\"minikube-ingress-dns\"},\"name\":\"minikube-ingress-dns-config-volume\"}]}}\n,kubernetes.io/config.seen: 2025-10-18T14:09:30.459019619Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b439dd6e51abd6ee7156af98c543df3bcd516cd309de6b0b6fd934ae60d4579a,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-c5cbb,Uid:64430541-160f-413b-b21e-6636047a8859,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760796567466950296,Labels:map[string]string{controller-revision-hash: 7f87d6fd8d,io.kubernetes.contai
ner.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-c5cbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64430541-160f-413b-b21e-6636047a8859,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-18T14:09:27.082796894Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ba30da275bea105c47caa89fd0d4a924e96bd43b200434b972d0f1686c5cdb46,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-9t6mk,Uid:d2cf3593-0ffc-49aa-ab5d-1ecf71d259cc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760796567080516128,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-9t6mk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2cf3593-0ffc-49aa-ab5d-1ecf71d259cc,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-18T14:09:24.892203147Z,kubernetes.io/con
fig.source: api,},RuntimeHandler:,},&PodSandbox{Id:8fb6c60415fdaa40da442b8d93572f59350e86e5027e05f1e616ddc3e66d1895,Metadata:&PodSandboxMetadata{Name:kube-proxy-ckpzl,Uid:a3ac992c-4401-40f5-93dd-7a525ec3b2a5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760796566940271543,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-ckpzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3ac992c-4401-40f5-93dd-7a525ec3b2a5,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-18T14:09:24.813968220Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b783fc0f686a0773f409244090fb0347fd53adfbe3110712527fc3d39b81e149,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-891059,Uid:5086595138b36f6eb8ac54e83c6bc182,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760796553342754492,Labels:map[string]string{component: kube
-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5086595138b36f6eb8ac54e83c6bc182,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5086595138b36f6eb8ac54e83c6bc182,kubernetes.io/config.seen: 2025-10-18T14:09:11.958973806Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c8fbc229d4f5f4b227bfc321c455f9928cc82e2099fb0746d33c7d9c893295f9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-891059,Uid:97082571db3e60e44c3d60e99a384436,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760796553338189523,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97082571db3e60e44c3d60e99a384436,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address
.endpoint: 192.168.39.100:8443,kubernetes.io/config.hash: 97082571db3e60e44c3d60e99a384436,kubernetes.io/config.seen: 2025-10-18T14:09:11.958971359Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4b35987ede0428e0950b004d1104001ead21d6b6989238185c2fb74d3cf3bf44,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-891059,Uid:1348b107c675acfd26c3d687c91d60c5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760796553336816640,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1348b107c675acfd26c3d687c91d60c5,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1348b107c675acfd26c3d687c91d60c5,kubernetes.io/config.seen: 2025-10-18T14:09:11.958972590Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bfa6fdc1baf4d2d9eaa5d56358672ee6314ea527df88bc7c5cfbb6d6
8599a772,Metadata:&PodSandboxMetadata{Name:etcd-addons-891059,Uid:f4360d09804819a4ab0d1ffed7423947,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760796553323483448,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4360d09804819a4ab0d1ffed7423947,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.100:2379,kubernetes.io/config.hash: f4360d09804819a4ab0d1ffed7423947,kubernetes.io/config.seen: 2025-10-18T14:09:11.958967648Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=63bba70e-74d3-4163-afcf-2168aeafb133 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 18 14:17:28 addons-891059 crio[822]: time="2025-10-18 14:17:28.486350634Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c6293b0-10fe-416d-a861-4a1a54c99666 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:17:28 addons-891059 crio[822]: time="2025-10-18 14:17:28.486451096Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c6293b0-10fe-416d-a861-4a1a54c99666 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:17:28 addons-891059 crio[822]: time="2025-10-18 14:17:28.491101952Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4019b2f5a82ebc7fb6dabae9a874d699665a5d8c69de73eb709ca4a501ac015,PodSandboxId:871fa03a650614957b7d3d2014f39478cf8cb5cd45eb550c6abd6222b43732a9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760796662606988160,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 75ccff45-9202-4152-b90e-8a5a6d306c7d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5e462bcd2b5f465fe95346688533db6801a9c93215937bfbcf4abffe97f6c0,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1760796657878096678,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e429add87fb7915cacc16256e7047f4f649d645dd6350add56e90ceda89be5cb,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1760796656108432125,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c154e6ad0036f8e08a29b6d27bd296913987ed0f4235dd603093178c177e86b,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1760796654288737227,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e42c0ad16a76575cdf86955e752de6fc61fbdffec61b610745b88dc300290e,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1760796650670836429,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:
map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ce2976bee33494c9148720fc6f41dafc7c06699c436b9f7352992e408fc1ce,PodSandboxId:2f9eb1464924400027510bd40640a85e472321a499aaff7e545d8f90a3a2b454,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760796649028158931,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-con
troller-675c5ddd98-bphwz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5355fea1-7cc1-4587-853e-61aaaa6f569e,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9830a2003573c4745aeef463de8c6f60ef95ad1ea86413fbba89a04f8d287e29,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1760796641350506570,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b41579872800aaa54c544cb3ac01bd4bfbdb75ed8bfb2068b63a461effcb494,PodSandboxId:d23e703cbfeb7f985a5ee31bbb8e9a0beaaca929b2a9d12c66bc036a83f06e54,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1760796639902169014,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efc965f-2bb9-4589-8896-270849ff244b,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b6304f138a157686248517d9a4334e9f7e0a04eb4d75d3e8242c7d66099747,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name
:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1760796637960180053,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3781d3641f70c2afdd9e7cf33046996dcefa7ceeb31eaeb6735fe958ea81fbdf,PodSa
ndboxId:2d23bcaba041603a7033e5364863b52ee33056bf513c91b93cbd051dc4ee50fb,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1760796636160087491,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-bzhfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3e3fb2c-05b7-448d-bca6-3438d70868b1,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:9bb6d569a2a3f2ef99bf632b0e17f74e8f99944756e5338f36177afc9784250e,PodSandboxId:7a44187aa2259b4391883c3f4e9b9dfefc7c60831b7bfc9273715b7a8b6675b5,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1760796636024422683,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-b9tnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a028a732-94f8-46f5-8ade-adc72e44a92d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6267021fe47465dfb0a972ca3ac1853819fcb8ec9c4af79da3515676f56c70d,PodSandboxId:7483a2b2bce44deaa3b7126ad65266f9ccb9eb59517cc399fde2646bdce00e31,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760796634343510547,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lz2l5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: edbb1e3e-09f2-4958-b943-de86e541c2ab,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e786527308546addc508c7f9fde815f3dbf888dbbd28417a6fda88b88fa8ab,PodSandboxId:19bb29e5d6915f98e1c622bd12dfd02a46541ba9d2922196d95c45d1eef03591,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1760796634154278160,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66fa96af-5215-410d-899c-8ee3de6c2691,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:405281ec9edfa02e6ef1722dec6adc497496544ed9e116c4827e07faa66e42b3,PodSandboxId:784fb9851d0e370b86d85cb15f009b0ada6ea2b7f21e505158415537390f7d3a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760796631912253285,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nbrm2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e48f1e46-67fb-4c71-bc01-b2f3743345f0,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:751b2df6a5bf4c3261a679f6e961086b9a7e8a0d308b47ba5a823ed41d50ff7c,PodSandboxId:e7adc46dd97a6e6351f075aad05529d7968ddcfdb815b441bff765545717c999,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760796621649083356,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-bz8k2,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 32f0a88f-aea2-4621-a5b1-df5a3fb86a2b,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{
\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3faa5d947b9ededdb0f9530cfb2606f9d20f027050a247e368207048d7856361,PodSandboxId:04626452678ece1669cf1b64aa42ec4e38880fec5bfbbb2efb6abcab66a2eba0,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760796611084064989,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2be3a2-f8a7-4762-a
4a6-aeea42df7e21,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da75007bac0f47603bb3540fd8ae444427639a840b26793c26a279445acc6504,PodSandboxId:bf130a85fe68d5cdda719544aa9afd112627aeb7acb1df2c62daeedf486112a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760796577983458040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6f8bdeb-9db0-44f3-b3cb-8396901acaf5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90350cf8ae05058e381c6f06dfaaa1b66c33001b294c94602cbb4601d22e5bc2,PodSandboxId:b439dd6e51abd6ee7156af98c543df3bcd516cd309de6b0b6fd934ae60d4579a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760796574525913819,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.p
od.name: amd-gpu-device-plugin-c5cbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64430541-160f-413b-b21e-6636047a8859,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b099b5b37807cb6ddae926ed2ce7fd3b3113ee1520cb817da8f25923c16c925,PodSandboxId:ba30da275bea105c47caa89fd0d4a924e96bd43b200434b972d0f1686c5cdb46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760796569075663973,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-
9t6mk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2cf3593-0ffc-49aa-ab5d-1ecf71d259cc,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97e1670c81585e6415c369e52af3deebb586e548711c359ac4fe22d13bfbf881,PodSandboxId:8fb6c60415fdaa40da442b8d93572f59350e86e5027e05f1e616ddc3e66d1895,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840e
c8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760796567868668763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ckpzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3ac992c-4401-40f5-93dd-7a525ec3b2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f010fdc156cb398c84f19945fc8b9f186ef23cb554bce047cf0bdadc63ef552,PodSandboxId:bfa6fdc1baf4d2d9eaa5d56358672ee6314ea527df88bc7c5cfbb6d68599a772,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac01
15,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760796553601510681,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4360d09804819a4ab0d1ffed7423947,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873a633e0ebfdc97218e103cd398dde377449c146a2b3d8affa3222d72e07fad,PodSandboxId:4b35987ede0428e0950b004d1104001ead21d6b6989238185c2fb74d3cf3bf44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Atte
mpt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760796553612924961,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1348b107c675acfd26c3d687c91d60c5,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50cc3d2477595030b199dee8a2c8a4cb8f2f508dbbe7bdf89f535de0d3d1d6b6,PodSan
dboxId:b783fc0f686a0773f409244090fb0347fd53adfbe3110712527fc3d39b81e149,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760796553577778017,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5086595138b36f6eb8ac54e83c6bc182,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:550e8ca214589028236bc3f3e98efbed492d3f84addbacedfb6929bee8541bab,PodSandboxId:c8fbc229d4f5f4b227bfc321c455f9928cc82e2099fb0746d33c7d9c893295f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760796553532990421,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97082571db3e60e44c3d60e99a384436,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1c6293b0-10fe-416d-a861-4a1a54c99666 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:17:28 addons-891059 crio[822]: time="2025-10-18 14:17:28.503205411Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=178162d2-6b6c-414f-a2d7-84391b7d5069 name=/runtime.v1.RuntimeService/Version
	Oct 18 14:17:28 addons-891059 crio[822]: time="2025-10-18 14:17:28.503298133Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=178162d2-6b6c-414f-a2d7-84391b7d5069 name=/runtime.v1.RuntimeService/Version
	Oct 18 14:17:28 addons-891059 crio[822]: time="2025-10-18 14:17:28.505646791Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c1f95a48-277f-460c-a9eb-78139a2ecacc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:17:28 addons-891059 crio[822]: time="2025-10-18 14:17:28.506848949Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760797048506816275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:520517,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c1f95a48-277f-460c-a9eb-78139a2ecacc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:17:28 addons-891059 crio[822]: time="2025-10-18 14:17:28.508690165Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a61bb061-e9dd-475d-af61-bcab9b42aa80 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:17:28 addons-891059 crio[822]: time="2025-10-18 14:17:28.508820113Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a61bb061-e9dd-475d-af61-bcab9b42aa80 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:17:28 addons-891059 crio[822]: time="2025-10-18 14:17:28.509750120Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4019b2f5a82ebc7fb6dabae9a874d699665a5d8c69de73eb709ca4a501ac015,PodSandboxId:871fa03a650614957b7d3d2014f39478cf8cb5cd45eb550c6abd6222b43732a9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760796662606988160,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 75ccff45-9202-4152-b90e-8a5a6d306c7d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5e462bcd2b5f465fe95346688533db6801a9c93215937bfbcf4abffe97f6c0,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1760796657878096678,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e429add87fb7915cacc16256e7047f4f649d645dd6350add56e90ceda89be5cb,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1760796656108432125,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c154e6ad0036f8e08a29b6d27bd296913987ed0f4235dd603093178c177e86b,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1760796654288737227,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e42c0ad16a76575cdf86955e752de6fc61fbdffec61b610745b88dc300290e,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1760796650670836429,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:
map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ce2976bee33494c9148720fc6f41dafc7c06699c436b9f7352992e408fc1ce,PodSandboxId:2f9eb1464924400027510bd40640a85e472321a499aaff7e545d8f90a3a2b454,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760796649028158931,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-con
troller-675c5ddd98-bphwz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5355fea1-7cc1-4587-853e-61aaaa6f569e,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9830a2003573c4745aeef463de8c6f60ef95ad1ea86413fbba89a04f8d287e29,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1760796641350506570,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b41579872800aaa54c544cb3ac01bd4bfbdb75ed8bfb2068b63a461effcb494,PodSandboxId:d23e703cbfeb7f985a5ee31bbb8e9a0beaaca929b2a9d12c66bc036a83f06e54,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1760796639902169014,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efc965f-2bb9-4589-8896-270849ff244b,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b6304f138a157686248517d9a4334e9f7e0a04eb4d75d3e8242c7d66099747,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name
:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1760796637960180053,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3781d3641f70c2afdd9e7cf33046996dcefa7ceeb31eaeb6735fe958ea81fbdf,PodSa
ndboxId:2d23bcaba041603a7033e5364863b52ee33056bf513c91b93cbd051dc4ee50fb,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1760796636160087491,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-bzhfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3e3fb2c-05b7-448d-bca6-3438d70868b1,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:9bb6d569a2a3f2ef99bf632b0e17f74e8f99944756e5338f36177afc9784250e,PodSandboxId:7a44187aa2259b4391883c3f4e9b9dfefc7c60831b7bfc9273715b7a8b6675b5,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1760796636024422683,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-b9tnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a028a732-94f8-46f5-8ade-adc72e44a92d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6267021fe47465dfb0a972ca3ac1853819fcb8ec9c4af79da3515676f56c70d,PodSandboxId:7483a2b2bce44deaa3b7126ad65266f9ccb9eb59517cc399fde2646bdce00e31,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760796634343510547,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lz2l5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: edbb1e3e-09f2-4958-b943-de86e541c2ab,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e786527308546addc508c7f9fde815f3dbf888dbbd28417a6fda88b88fa8ab,PodSandboxId:19bb29e5d6915f98e1c622bd12dfd02a46541ba9d2922196d95c45d1eef03591,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1760796634154278160,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66fa96af-5215-410d-899c-8ee3de6c2691,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:405281ec9edfa02e6ef1722dec6adc497496544ed9e116c4827e07faa66e42b3,PodSandboxId:784fb9851d0e370b86d85cb15f009b0ada6ea2b7f21e505158415537390f7d3a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760796631912253285,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nbrm2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e48f1e46-67fb-4c71-bc01-b2f3743345f0,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:751b2df6a5bf4c3261a679f6e961086b9a7e8a0d308b47ba5a823ed41d50ff7c,PodSandboxId:e7adc46dd97a6e6351f075aad05529d7968ddcfdb815b441bff765545717c999,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760796621649083356,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-bz8k2,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 32f0a88f-aea2-4621-a5b1-df5a3fb86a2b,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{
\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3faa5d947b9ededdb0f9530cfb2606f9d20f027050a247e368207048d7856361,PodSandboxId:04626452678ece1669cf1b64aa42ec4e38880fec5bfbbb2efb6abcab66a2eba0,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760796611084064989,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2be3a2-f8a7-4762-a
4a6-aeea42df7e21,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da75007bac0f47603bb3540fd8ae444427639a840b26793c26a279445acc6504,PodSandboxId:bf130a85fe68d5cdda719544aa9afd112627aeb7acb1df2c62daeedf486112a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760796577983458040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6f8bdeb-9db0-44f3-b3cb-8396901acaf5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90350cf8ae05058e381c6f06dfaaa1b66c33001b294c94602cbb4601d22e5bc2,PodSandboxId:b439dd6e51abd6ee7156af98c543df3bcd516cd309de6b0b6fd934ae60d4579a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760796574525913819,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.p
od.name: amd-gpu-device-plugin-c5cbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64430541-160f-413b-b21e-6636047a8859,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b099b5b37807cb6ddae926ed2ce7fd3b3113ee1520cb817da8f25923c16c925,PodSandboxId:ba30da275bea105c47caa89fd0d4a924e96bd43b200434b972d0f1686c5cdb46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760796569075663973,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-
9t6mk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2cf3593-0ffc-49aa-ab5d-1ecf71d259cc,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97e1670c81585e6415c369e52af3deebb586e548711c359ac4fe22d13bfbf881,PodSandboxId:8fb6c60415fdaa40da442b8d93572f59350e86e5027e05f1e616ddc3e66d1895,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840e
c8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760796567868668763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ckpzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3ac992c-4401-40f5-93dd-7a525ec3b2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f010fdc156cb398c84f19945fc8b9f186ef23cb554bce047cf0bdadc63ef552,PodSandboxId:bfa6fdc1baf4d2d9eaa5d56358672ee6314ea527df88bc7c5cfbb6d68599a772,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac01
15,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760796553601510681,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4360d09804819a4ab0d1ffed7423947,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873a633e0ebfdc97218e103cd398dde377449c146a2b3d8affa3222d72e07fad,PodSandboxId:4b35987ede0428e0950b004d1104001ead21d6b6989238185c2fb74d3cf3bf44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Atte
mpt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760796553612924961,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1348b107c675acfd26c3d687c91d60c5,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50cc3d2477595030b199dee8a2c8a4cb8f2f508dbbe7bdf89f535de0d3d1d6b6,PodSan
dboxId:b783fc0f686a0773f409244090fb0347fd53adfbe3110712527fc3d39b81e149,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760796553577778017,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5086595138b36f6eb8ac54e83c6bc182,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:550e8ca214589028236bc3f3e98efbed492d3f84addbacedfb6929bee8541bab,PodSandboxId:c8fbc229d4f5f4b227bfc321c455f9928cc82e2099fb0746d33c7d9c893295f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760796553532990421,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97082571db3e60e44c3d60e99a384436,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a61bb061-e9dd-475d-af61-bcab9b42aa80 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:17:28 addons-891059 crio[822]: time="2025-10-18 14:17:28.554628691Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c6c27e60-a030-4d77-ab52-3c1a152597ad name=/runtime.v1.RuntimeService/Version
	Oct 18 14:17:28 addons-891059 crio[822]: time="2025-10-18 14:17:28.554723764Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c6c27e60-a030-4d77-ab52-3c1a152597ad name=/runtime.v1.RuntimeService/Version
	Oct 18 14:17:28 addons-891059 crio[822]: time="2025-10-18 14:17:28.556820240Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=78d71384-7c83-424b-8d2a-c782ab86c870 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:17:28 addons-891059 crio[822]: time="2025-10-18 14:17:28.559522969Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760797048559493988,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:520517,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=78d71384-7c83-424b-8d2a-c782ab86c870 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:17:28 addons-891059 crio[822]: time="2025-10-18 14:17:28.560382019Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c20bcbca-4ba0-4558-9940-b4867326acf1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:17:28 addons-891059 crio[822]: time="2025-10-18 14:17:28.560442758Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c20bcbca-4ba0-4558-9940-b4867326acf1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:17:28 addons-891059 crio[822]: time="2025-10-18 14:17:28.561843957Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4019b2f5a82ebc7fb6dabae9a874d699665a5d8c69de73eb709ca4a501ac015,PodSandboxId:871fa03a650614957b7d3d2014f39478cf8cb5cd45eb550c6abd6222b43732a9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760796662606988160,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 75ccff45-9202-4152-b90e-8a5a6d306c7d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5e462bcd2b5f465fe95346688533db6801a9c93215937bfbcf4abffe97f6c0,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1760796657878096678,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e429add87fb7915cacc16256e7047f4f649d645dd6350add56e90ceda89be5cb,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1760796656108432125,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c154e6ad0036f8e08a29b6d27bd296913987ed0f4235dd603093178c177e86b,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1760796654288737227,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e42c0ad16a76575cdf86955e752de6fc61fbdffec61b610745b88dc300290e,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1760796650670836429,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:
map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ce2976bee33494c9148720fc6f41dafc7c06699c436b9f7352992e408fc1ce,PodSandboxId:2f9eb1464924400027510bd40640a85e472321a499aaff7e545d8f90a3a2b454,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760796649028158931,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-con
troller-675c5ddd98-bphwz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5355fea1-7cc1-4587-853e-61aaaa6f569e,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9830a2003573c4745aeef463de8c6f60ef95ad1ea86413fbba89a04f8d287e29,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1760796641350506570,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b41579872800aaa54c544cb3ac01bd4bfbdb75ed8bfb2068b63a461effcb494,PodSandboxId:d23e703cbfeb7f985a5ee31bbb8e9a0beaaca929b2a9d12c66bc036a83f06e54,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1760796639902169014,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efc965f-2bb9-4589-8896-270849ff244b,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b6304f138a157686248517d9a4334e9f7e0a04eb4d75d3e8242c7d66099747,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name
:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1760796637960180053,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3781d3641f70c2afdd9e7cf33046996dcefa7ceeb31eaeb6735fe958ea81fbdf,PodSa
ndboxId:2d23bcaba041603a7033e5364863b52ee33056bf513c91b93cbd051dc4ee50fb,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1760796636160087491,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-bzhfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3e3fb2c-05b7-448d-bca6-3438d70868b1,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:9bb6d569a2a3f2ef99bf632b0e17f74e8f99944756e5338f36177afc9784250e,PodSandboxId:7a44187aa2259b4391883c3f4e9b9dfefc7c60831b7bfc9273715b7a8b6675b5,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1760796636024422683,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-b9tnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a028a732-94f8-46f5-8ade-adc72e44a92d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6267021fe47465dfb0a972ca3ac1853819fcb8ec9c4af79da3515676f56c70d,PodSandboxId:7483a2b2bce44deaa3b7126ad65266f9ccb9eb59517cc399fde2646bdce00e31,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760796634343510547,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lz2l5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: edbb1e3e-09f2-4958-b943-de86e541c2ab,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e786527308546addc508c7f9fde815f3dbf888dbbd28417a6fda88b88fa8ab,PodSandboxId:19bb29e5d6915f98e1c622bd12dfd02a46541ba9d2922196d95c45d1eef03591,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1760796634154278160,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66fa96af-5215-410d-899c-8ee3de6c2691,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:405281ec9edfa02e6ef1722dec6adc497496544ed9e116c4827e07faa66e42b3,PodSandboxId:784fb9851d0e370b86d85cb15f009b0ada6ea2b7f21e505158415537390f7d3a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760796631912253285,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nbrm2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e48f1e46-67fb-4c71-bc01-b2f3743345f0,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:751b2df6a5bf4c3261a679f6e961086b9a7e8a0d308b47ba5a823ed41d50ff7c,PodSandboxId:e7adc46dd97a6e6351f075aad05529d7968ddcfdb815b441bff765545717c999,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760796621649083356,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-bz8k2,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 32f0a88f-aea2-4621-a5b1-df5a3fb86a2b,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{
\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3faa5d947b9ededdb0f9530cfb2606f9d20f027050a247e368207048d7856361,PodSandboxId:04626452678ece1669cf1b64aa42ec4e38880fec5bfbbb2efb6abcab66a2eba0,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760796611084064989,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2be3a2-f8a7-4762-a
4a6-aeea42df7e21,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da75007bac0f47603bb3540fd8ae444427639a840b26793c26a279445acc6504,PodSandboxId:bf130a85fe68d5cdda719544aa9afd112627aeb7acb1df2c62daeedf486112a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760796577983458040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6f8bdeb-9db0-44f3-b3cb-8396901acaf5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90350cf8ae05058e381c6f06dfaaa1b66c33001b294c94602cbb4601d22e5bc2,PodSandboxId:b439dd6e51abd6ee7156af98c543df3bcd516cd309de6b0b6fd934ae60d4579a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760796574525913819,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.p
od.name: amd-gpu-device-plugin-c5cbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64430541-160f-413b-b21e-6636047a8859,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b099b5b37807cb6ddae926ed2ce7fd3b3113ee1520cb817da8f25923c16c925,PodSandboxId:ba30da275bea105c47caa89fd0d4a924e96bd43b200434b972d0f1686c5cdb46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760796569075663973,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-
9t6mk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2cf3593-0ffc-49aa-ab5d-1ecf71d259cc,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97e1670c81585e6415c369e52af3deebb586e548711c359ac4fe22d13bfbf881,PodSandboxId:8fb6c60415fdaa40da442b8d93572f59350e86e5027e05f1e616ddc3e66d1895,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840e
c8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760796567868668763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ckpzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3ac992c-4401-40f5-93dd-7a525ec3b2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f010fdc156cb398c84f19945fc8b9f186ef23cb554bce047cf0bdadc63ef552,PodSandboxId:bfa6fdc1baf4d2d9eaa5d56358672ee6314ea527df88bc7c5cfbb6d68599a772,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac01
15,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760796553601510681,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4360d09804819a4ab0d1ffed7423947,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873a633e0ebfdc97218e103cd398dde377449c146a2b3d8affa3222d72e07fad,PodSandboxId:4b35987ede0428e0950b004d1104001ead21d6b6989238185c2fb74d3cf3bf44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Atte
mpt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760796553612924961,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1348b107c675acfd26c3d687c91d60c5,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50cc3d2477595030b199dee8a2c8a4cb8f2f508dbbe7bdf89f535de0d3d1d6b6,PodSan
dboxId:b783fc0f686a0773f409244090fb0347fd53adfbe3110712527fc3d39b81e149,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760796553577778017,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5086595138b36f6eb8ac54e83c6bc182,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:550e8ca214589028236bc3f3e98efbed492d3f84addbacedfb6929bee8541bab,PodSandboxId:c8fbc229d4f5f4b227bfc321c455f9928cc82e2099fb0746d33c7d9c893295f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760796553532990421,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97082571db3e60e44c3d60e99a384436,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c20bcbca-4ba0-4558-9940-b4867326acf1 name=/runtime.v1.RuntimeService/ListContainers
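	The crio debug entries above are the kubelet's periodic CRI polls: RuntimeService/Version, ImageService/ImageFsInfo, and an unfiltered RuntimeService/ListContainers ("No filters were applied, returning full container list"). As a sketch, the same three calls can be replayed by hand from inside the node with crictl; the profile name addons-891059 is taken from this run, and sudo is assumed because crictl talks to the root-owned CRI-O socket:

	  minikube -p addons-891059 ssh
	  sudo crictl version       # RuntimeService/Version
	  sudo crictl imagefsinfo   # ImageService/ImageFsInfo
	  sudo crictl ps -a         # RuntimeService/ListContainers, no filter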
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	a4019b2f5a82e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          6 minutes ago       Running             busybox                                  0                   871fa03a65061       busybox
	2d5e462bcd2b5       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          6 minutes ago       Running             csi-snapshotter                          0                   90e767d4c7dba       csi-hostpathplugin-65z6z
	e429add87fb79       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          6 minutes ago       Running             csi-provisioner                          0                   90e767d4c7dba       csi-hostpathplugin-65z6z
	0c154e6ad0036       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            6 minutes ago       Running             liveness-probe                           0                   90e767d4c7dba       csi-hostpathplugin-65z6z
	34e42c0ad16a7       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           6 minutes ago       Running             hostpath                                 0                   90e767d4c7dba       csi-hostpathplugin-65z6z
	90ce2976bee33       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd                             6 minutes ago       Running             controller                               0                   2f9eb14649244       ingress-nginx-controller-675c5ddd98-bphwz
	9830a2003573c       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                6 minutes ago       Running             node-driver-registrar                    0                   90e767d4c7dba       csi-hostpathplugin-65z6z
	8b41579872800       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              6 minutes ago       Running             csi-resizer                              0                   d23e703cbfeb7       csi-hostpath-resizer-0
	e6b6304f138a1       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   6 minutes ago       Running             csi-external-health-monitor-controller   0                   90e767d4c7dba       csi-hostpathplugin-65z6z
	3781d3641f70c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      6 minutes ago       Running             volume-snapshot-controller               0                   2d23bcaba0416       snapshot-controller-7d9fbc56b8-bzhfk
	9bb6d569a2a3f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      6 minutes ago       Running             volume-snapshot-controller               0                   7a44187aa2259       snapshot-controller-7d9fbc56b8-b9tnq
	a6267021fe474       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39                   6 minutes ago       Exited              patch                                    0                   7483a2b2bce44       ingress-nginx-admission-patch-lz2l5
	c8e7865273085       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             6 minutes ago       Running             csi-attacher                             0                   19bb29e5d6915       csi-hostpath-attacher-0
	405281ec9edfa       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39                   6 minutes ago       Exited              create                                   0                   784fb9851d0e3       ingress-nginx-admission-create-nbrm2
	751b2df6a5bf4       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            7 minutes ago       Running             gadget                                   0                   e7adc46dd97a6       gadget-bz8k2
	3faa5d947b9ed       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               7 minutes ago       Running             minikube-ingress-dns                     0                   04626452678ec       kube-ingress-dns-minikube
	da75007bac0f4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             7 minutes ago       Running             storage-provisioner                      0                   bf130a85fe68d       storage-provisioner
	90350cf8ae050       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     7 minutes ago       Running             amd-gpu-device-plugin                    0                   b439dd6e51abd       amd-gpu-device-plugin-c5cbb
	5b099b5b37807       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             7 minutes ago       Running             coredns                                  0                   ba30da275bea1       coredns-66bc5c9577-9t6mk
	97e1670c81585       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             8 minutes ago       Running             kube-proxy                               0                   8fb6c60415fda       kube-proxy-ckpzl
	873a633e0ebfd       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             8 minutes ago       Running             kube-controller-manager                  0                   4b35987ede042       kube-controller-manager-addons-891059
	4f010fdc156cb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             8 minutes ago       Running             etcd                                     0                   bfa6fdc1baf4d       etcd-addons-891059
	50cc3d2477595       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             8 minutes ago       Running             kube-scheduler                           0                   b783fc0f686a0       kube-scheduler-addons-891059
	550e8ca214589       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             8 minutes ago       Running             kube-apiserver                           0                   c8fbc229d4f5f       kube-apiserver-addons-891059
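	This table appears to be the human-readable summary of the ListContainers payload above. To drill into a single row, crictl accepts a container-name regex and a container-ID prefix; for the kube-apiserver entry, for example:

	  sudo crictl ps --name kube-apiserver   # filter the table by container name
	  sudo crictl logs 550e8ca214589         # logs for the container ID shown above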
	
	
	==> coredns [5b099b5b37807cb6ddae926ed2ce7fd3b3113ee1520cb817da8f25923c16c925] <==
	[INFO] 10.244.0.8:38553 - 35504 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000072442s
	[INFO] 10.244.0.8:41254 - 10457 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000126469s
	[INFO] 10.244.0.8:41254 - 10148 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000351753s
	[INFO] 10.244.0.8:58812 - 14712 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000165201s
	[INFO] 10.244.0.8:58812 - 14408 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000227737s
	[INFO] 10.244.0.8:46072 - 17563 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000089989s
	[INFO] 10.244.0.8:46072 - 17331 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000357865s
	[INFO] 10.244.0.8:44214 - 24523 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000103993s
	[INFO] 10.244.0.8:44214 - 24308 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000319225s
	[INFO] 10.244.0.23:53101 - 38230 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000789741s
	[INFO] 10.244.0.23:39743 - 4637 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00014608s
	[INFO] 10.244.0.23:34680 - 45484 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000257617s
	[INFO] 10.244.0.23:57667 - 2834 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000156321s
	[INFO] 10.244.0.23:49060 - 9734 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000228026s
	[INFO] 10.244.0.23:49380 - 40146 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00011544s
	[INFO] 10.244.0.23:59610 - 60837 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001192659s
	[INFO] 10.244.0.23:43936 - 55741 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001950004s
	[INFO] 10.244.0.28:45423 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NXDOMAIN qr,aa,rd 149 0.000594412s
	[INFO] 10.244.0.28:35326 - 3 "AAAA IN registry.kube-system.svc.cluster.local.default.svc.cluster.local. udp 82 false 512" NXDOMAIN qr,aa,rd 175 0.000279094s
	[INFO] 10.244.0.28:34121 - 4 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000115216s
	[INFO] 10.244.0.28:43026 - 5 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000225891s
	[INFO] 10.244.0.28:58520 - 6 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NXDOMAIN qr,aa,rd 149 0.000121233s
	[INFO] 10.244.0.28:39709 - 7 "A IN registry.kube-system.svc.cluster.local.default.svc.cluster.local. udp 82 false 512" NXDOMAIN qr,aa,rd 175 0.000126579s
	[INFO] 10.244.0.28:46571 - 8 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000295561s
	[INFO] 10.244.0.28:34480 - 9 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000104287s
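	The repeated suffixes in these queries (....kube-system.svc.cluster.local, ....svc.cluster.local, ....cluster.local) are the pod resolv.conf search list being walked under ndots:5, so NXDOMAIN answers for the suffixed names are expected; only the answer for the bare FQDN matters. Notably, the later queries from 10.244.0.28 get NXDOMAIN even for registry.kube-system.svc.cluster.local itself, which suggests the registry Service record was not resolvable at that point. A lookup can be reproduced from a throwaway pod; the pod name dns-probe here is arbitrary:

	  kubectl --context addons-891059 run dns-probe --rm -it --restart=Never \
	    --image=gcr.io/k8s-minikube/busybox -- \
	    nslookup registry.kube-system.svc.cluster.local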
	
	
	==> describe nodes <==
	Name:               addons-891059
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-891059
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=addons-891059
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T14_09_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-891059
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-891059"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 14:09:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-891059
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 14:17:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 14:17:20 +0000   Sat, 18 Oct 2025 14:09:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 14:17:20 +0000   Sat, 18 Oct 2025 14:09:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 14:17:20 +0000   Sat, 18 Oct 2025 14:09:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 14:17:20 +0000   Sat, 18 Oct 2025 14:09:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.100
	  Hostname:    addons-891059
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 372d92314fa4448095fc5052e6676096
	  System UUID:                372d9231-4fa4-4480-95fc-5052e6676096
	  Boot ID:                    7e38709f-8590-4225-8b4d-3bbac20f6c51
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  default                     test-local-path                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  gadget                      gadget-bz8k2                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m55s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-bphwz    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         7m55s
	  kube-system                 amd-gpu-device-plugin-c5cbb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  kube-system                 coredns-66bc5c9577-9t6mk                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m4s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m52s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m52s
	  kube-system                 csi-hostpathplugin-65z6z                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m52s
	  kube-system                 etcd-addons-891059                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m9s
	  kube-system                 kube-apiserver-addons-891059                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m9s
	  kube-system                 kube-controller-manager-addons-891059        200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	  kube-system                 kube-proxy-ckpzl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m4s
	  kube-system                 kube-scheduler-addons-891059                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m9s
	  kube-system                 snapshot-controller-7d9fbc56b8-b9tnq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m54s
	  kube-system                 snapshot-controller-7d9fbc56b8-bzhfk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m53s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m59s  kube-proxy       
	  Normal  Starting                 8m9s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m9s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m9s   kubelet          Node addons-891059 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m9s   kubelet          Node addons-891059 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m9s   kubelet          Node addons-891059 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m8s   kubelet          Node addons-891059 status is now: NodeReady
	  Normal  RegisteredNode           8m5s   node-controller  Node addons-891059 event: Registered Node addons-891059 in Controller
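	The node detail above is kubectl describe output. Two quick ways to re-query the interesting parts of it, the request/limit pressure and the allocatable capacity, against this profile:

	  kubectl --context addons-891059 describe node addons-891059
	  kubectl --context addons-891059 get node addons-891059 \
	    -o jsonpath='{.status.allocatable}'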
	
	
	==> dmesg <==
	[  +0.252518] kauditd_printk_skb: 227 callbacks suppressed
	[  +0.620971] kauditd_printk_skb: 414 callbacks suppressed
	[ +15.304937] kauditd_printk_skb: 49 callbacks suppressed
	[Oct18 14:10] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.485780] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.577564] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.762881] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.526985] kauditd_printk_skb: 26 callbacks suppressed
	[  +2.667244] kauditd_printk_skb: 76 callbacks suppressed
	[  +3.038951] kauditd_printk_skb: 160 callbacks suppressed
	[  +5.632898] kauditd_printk_skb: 88 callbacks suppressed
	[  +5.124721] kauditd_printk_skb: 47 callbacks suppressed
	[Oct18 14:11] kauditd_printk_skb: 41 callbacks suppressed
	[ +11.104883] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.000298] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.000091] kauditd_printk_skb: 61 callbacks suppressed
	[  +0.000058] kauditd_printk_skb: 94 callbacks suppressed
	[  +5.819366] kauditd_printk_skb: 58 callbacks suppressed
	[Oct18 14:12] kauditd_printk_skb: 38 callbacks suppressed
	[  +7.221421] kauditd_printk_skb: 45 callbacks suppressed
	[ +11.837047] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.423844] kauditd_printk_skb: 58 callbacks suppressed
	[Oct18 14:13] kauditd_printk_skb: 25 callbacks suppressed
	[Oct18 14:14] kauditd_printk_skb: 17 callbacks suppressed
	[ +31.538641] kauditd_printk_skb: 22 callbacks suppressed
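	The recurring "kauditd_printk_skb: N callbacks suppressed" lines mean the kernel rate-limited printing of audit records to the console; on their own they are noise, not an error. A rough count inside the guest (assuming the dmesg shipped in the Buildroot image):

	  minikube -p addons-891059 ssh "dmesg | grep -c 'callbacks suppressed'"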
	
	
	==> etcd [4f010fdc156cb398c84f19945fc8b9f186ef23cb554bce047cf0bdadc63ef552] <==
	{"level":"info","ts":"2025-10-18T14:10:27.789790Z","caller":"traceutil/trace.go:172","msg":"trace[1019503945] transaction","detail":"{read_only:false; response_revision:980; number_of_response:1; }","duration":"291.472583ms","start":"2025-10-18T14:10:27.498307Z","end":"2025-10-18T14:10:27.789779Z","steps":["trace[1019503945] 'process raft request'  (duration: 291.315936ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T14:10:27.789826Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.361325ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T14:10:27.789858Z","caller":"traceutil/trace.go:172","msg":"trace[1466024528] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:979; }","duration":"117.444796ms","start":"2025-10-18T14:10:27.672405Z","end":"2025-10-18T14:10:27.789850Z","steps":["trace[1466024528] 'agreement among raft nodes before linearized reading'  (duration: 117.307687ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T14:10:27.790385Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.236345ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T14:10:27.790510Z","caller":"traceutil/trace.go:172","msg":"trace[732980754] range","detail":"{range_begin:/registry/deployments; range_end:; response_count:0; response_revision:980; }","duration":"108.373321ms","start":"2025-10-18T14:10:27.682130Z","end":"2025-10-18T14:10:27.790503Z","steps":["trace[732980754] 'agreement among raft nodes before linearized reading'  (duration: 108.1351ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T14:10:31.360128Z","caller":"traceutil/trace.go:172","msg":"trace[1845619058] transaction","detail":"{read_only:false; response_revision:997; number_of_response:1; }","duration":"140.456007ms","start":"2025-10-18T14:10:31.219657Z","end":"2025-10-18T14:10:31.360113Z","steps":["trace[1845619058] 'process raft request'  (duration: 140.331758ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T14:10:46.208681Z","caller":"traceutil/trace.go:172","msg":"trace[1766959808] transaction","detail":"{read_only:false; response_revision:1104; number_of_response:1; }","duration":"186.674963ms","start":"2025-10-18T14:10:46.021984Z","end":"2025-10-18T14:10:46.208659Z","steps":["trace[1766959808] 'process raft request'  (duration: 186.50291ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T14:11:02.952579Z","caller":"traceutil/trace.go:172","msg":"trace[1731516554] linearizableReadLoop","detail":"{readStateIndex:1235; appliedIndex:1235; }","duration":"113.28639ms","start":"2025-10-18T14:11:02.839276Z","end":"2025-10-18T14:11:02.952562Z","steps":["trace[1731516554] 'read index received'  (duration: 113.240159ms)","trace[1731516554] 'applied index is now lower than readState.Index'  (duration: 45.276µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T14:11:02.953674Z","caller":"traceutil/trace.go:172","msg":"trace[374499777] transaction","detail":"{read_only:false; response_revision:1198; number_of_response:1; }","duration":"131.03911ms","start":"2025-10-18T14:11:02.822625Z","end":"2025-10-18T14:11:02.953664Z","steps":["trace[374499777] 'process raft request'  (duration: 130.864849ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T14:11:02.953956Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.682576ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T14:11:02.958891Z","caller":"traceutil/trace.go:172","msg":"trace[2098939205] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1198; }","duration":"119.626167ms","start":"2025-10-18T14:11:02.839251Z","end":"2025-10-18T14:11:02.958878Z","steps":["trace[2098939205] 'agreement among raft nodes before linearized reading'  (duration: 114.665108ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T14:14:17.804829Z","caller":"traceutil/trace.go:172","msg":"trace[38135400] linearizableReadLoop","detail":"{readStateIndex:1845; appliedIndex:1845; }","duration":"254.786987ms","start":"2025-10-18T14:14:17.550008Z","end":"2025-10-18T14:14:17.804795Z","steps":["trace[38135400] 'read index received'  (duration: 254.774829ms)","trace[38135400] 'applied index is now lower than readState.Index'  (duration: 11.099µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T14:14:17.805068Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"255.018833ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T14:14:17.805091Z","caller":"traceutil/trace.go:172","msg":"trace[1453244013] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1761; }","duration":"255.081798ms","start":"2025-10-18T14:14:17.550004Z","end":"2025-10-18T14:14:17.805086Z","steps":["trace[1453244013] 'agreement among raft nodes before linearized reading'  (duration: 254.990525ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T14:14:17.805508Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.4057ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T14:14:17.805595Z","caller":"traceutil/trace.go:172","msg":"trace[926038607] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1762; }","duration":"133.500196ms","start":"2025-10-18T14:14:17.672087Z","end":"2025-10-18T14:14:17.805587Z","steps":["trace[926038607] 'agreement among raft nodes before linearized reading'  (duration: 133.363964ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T14:14:17.805922Z","caller":"traceutil/trace.go:172","msg":"trace[451226295] transaction","detail":"{read_only:false; response_revision:1762; number_of_response:1; }","duration":"260.563702ms","start":"2025-10-18T14:14:17.545349Z","end":"2025-10-18T14:14:17.805913Z","steps":["trace[451226295] 'process raft request'  (duration: 259.940194ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T14:14:23.347090Z","caller":"traceutil/trace.go:172","msg":"trace[355090838] linearizableReadLoop","detail":"{readStateIndex:1864; appliedIndex:1864; }","duration":"301.568388ms","start":"2025-10-18T14:14:23.045504Z","end":"2025-10-18T14:14:23.347073Z","steps":["trace[355090838] 'read index received'  (duration: 301.562884ms)","trace[355090838] 'applied index is now lower than readState.Index'  (duration: 4.302µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T14:14:23.347216Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"301.743884ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T14:14:23.347238Z","caller":"traceutil/trace.go:172","msg":"trace[954386242] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1779; }","duration":"301.780363ms","start":"2025-10-18T14:14:23.045451Z","end":"2025-10-18T14:14:23.347231Z","steps":["trace[954386242] 'agreement among raft nodes before linearized reading'  (duration: 301.721286ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T14:14:23.347296Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T14:14:23.045431Z","time spent":"301.853987ms","remote":"127.0.0.1:53840","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-10-18T14:14:23.347302Z","caller":"traceutil/trace.go:172","msg":"trace[648344144] transaction","detail":"{read_only:false; response_revision:1780; number_of_response:1; }","duration":"307.588862ms","start":"2025-10-18T14:14:23.039701Z","end":"2025-10-18T14:14:23.347290Z","steps":["trace[648344144] 'process raft request'  (duration: 307.402517ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T14:14:23.347441Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T14:14:23.039679Z","time spent":"307.656367ms","remote":"127.0.0.1:53970","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":677,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-36nbpcgspzmnrg7y5avwjcoroi\" mod_revision:1752 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-36nbpcgspzmnrg7y5avwjcoroi\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-36nbpcgspzmnrg7y5avwjcoroi\" > >"}
	{"level":"warn","ts":"2025-10-18T14:14:23.347489Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"166.844351ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T14:14:23.347507Z","caller":"traceutil/trace.go:172","msg":"trace[2122422757] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1780; }","duration":"166.862778ms","start":"2025-10-18T14:14:23.180639Z","end":"2025-10-18T14:14:23.347502Z","steps":["trace[2122422757] 'agreement among raft nodes before linearized reading'  (duration: 166.829225ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:17:28 up 8 min,  0 users,  load average: 1.21, 1.42, 0.93
	Linux addons-891059 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [550e8ca214589028236bc3f3e98efbed492d3f84addbacedfb6929bee8541bab] <==
	W1018 14:09:53.453446       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1018 14:09:53.493977       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 14:09:53.500603       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 14:10:34.174347       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 14:10:34.174816       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1018 14:10:34.174931       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1018 14:10:34.177190       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 14:10:34.177355       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1018 14:10:34.177368       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1018 14:10:41.344292       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.140.151:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.140.151:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.140.151:443: connect: connection refused" logger="UnhandledError"
	W1018 14:10:41.345235       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 14:10:41.349441       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1018 14:10:41.403792       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1018 14:11:09.006479       1 conn.go:339] Error on socket receive: read tcp 192.168.39.100:8443->192.168.39.1:51796: use of closed network connection
	E1018 14:11:09.215206       1 conn.go:339] Error on socket receive: read tcp 192.168.39.100:8443->192.168.39.1:51814: use of closed network connection
	I1018 14:11:36.964050       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1018 14:11:37.174177       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.128.177"}
	I1018 14:11:42.373806       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1018 14:12:52.429043       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.125.191"}
	
	
	==> kube-controller-manager [873a633e0ebfdc97218e103cd398dde377449c146a2b3d8affa3222d72e07fad] <==
	I1018 14:09:23.462051       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 14:09:23.462733       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 14:09:23.462816       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 14:09:23.464420       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 14:09:23.465969       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 14:09:23.466053       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 14:09:23.467317       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 14:09:23.471785       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 14:09:23.473104       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 14:09:23.507962       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 14:09:23.507980       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 14:09:23.507988       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1018 14:09:32.271939       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1018 14:09:53.430333       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1018 14:09:53.430686       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1018 14:09:53.430794       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1018 14:09:53.479595       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1018 14:09:53.486163       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1018 14:09:53.531732       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 14:09:53.587475       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1018 14:10:23.541245       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1018 14:10:23.598329       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1018 14:11:22.617268       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I1018 14:12:49.739835       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I1018 14:14:35.381048       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	
	
	==> kube-proxy [97e1670c81585e6415c369e52af3deebb586e548711c359ac4fe22d13bfbf881] <==
	I1018 14:09:29.078784       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 14:09:29.179875       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 14:09:29.180064       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.100"]
	E1018 14:09:29.180168       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 14:09:29.435752       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1018 14:09:29.435855       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1018 14:09:29.435886       1 server_linux.go:132] "Using iptables Proxier"
	I1018 14:09:29.458405       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 14:09:29.459486       1 server.go:527] "Version info" version="v1.34.1"
	I1018 14:09:29.459499       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 14:09:29.471972       1 config.go:200] "Starting service config controller"
	I1018 14:09:29.472688       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 14:09:29.472718       1 config.go:106] "Starting endpoint slice config controller"
	I1018 14:09:29.472724       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 14:09:29.472739       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 14:09:29.472745       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 14:09:29.474046       1 config.go:309] "Starting node config controller"
	I1018 14:09:29.474055       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 14:09:29.474060       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 14:09:29.573160       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 14:09:29.573457       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 14:09:29.573493       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [50cc3d2477595030b199dee8a2c8a4cb8f2f508dbbe7bdf89f535de0d3d1d6b6] <==
	E1018 14:09:16.517030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 14:09:16.517067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 14:09:16.517111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 14:09:16.517151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 14:09:16.517190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 14:09:16.517227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 14:09:16.517305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 14:09:16.517334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 14:09:16.517377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 14:09:16.517437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 14:09:16.524951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 14:09:17.315107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 14:09:17.350735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 14:09:17.351152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 14:09:17.351207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 14:09:17.375382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 14:09:17.392110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 14:09:17.451119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 14:09:17.490015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 14:09:17.582674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 14:09:17.653362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 14:09:17.692474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 14:09:17.761718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 14:09:17.762010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1018 14:09:18.995741       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 14:16:34 addons-891059 kubelet[1503]: E1018 14:16:34.913237    1503 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 18 14:16:34 addons-891059 kubelet[1503]: E1018 14:16:34.913292    1503 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 18 14:16:34 addons-891059 kubelet[1503]: E1018 14:16:34.913660    1503 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(95d229e3-8666-49b8-b2d2-2e34ed8f3aab): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 18 14:16:34 addons-891059 kubelet[1503]: E1018 14:16:34.913700    1503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="95d229e3-8666-49b8-b2d2-2e34ed8f3aab"
	Oct 18 14:16:40 addons-891059 kubelet[1503]: E1018 14:16:40.037148    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760797000036454834  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:520517}  inodes_used:{value:186}}"
	Oct 18 14:16:40 addons-891059 kubelet[1503]: E1018 14:16:40.037301    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760797000036454834  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:520517}  inodes_used:{value:186}}"
	Oct 18 14:16:42 addons-891059 kubelet[1503]: E1018 14:16:42.475848    1503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:4a35a7836fe08f340a42e25c4ac5eef4439585bbbb817b7bd28b2cd87c742642 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="d6bcb3d3-06c5-4ec8-8496-cf302660e01d"
	Oct 18 14:16:48 addons-891059 kubelet[1503]: E1018 14:16:48.472951    1503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="95d229e3-8666-49b8-b2d2-2e34ed8f3aab"
	Oct 18 14:16:50 addons-891059 kubelet[1503]: E1018 14:16:50.042067    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760797010040897119  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:520517}  inodes_used:{value:186}}"
	Oct 18 14:16:50 addons-891059 kubelet[1503]: E1018 14:16:50.042113    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760797010040897119  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:520517}  inodes_used:{value:186}}"
	Oct 18 14:16:59 addons-891059 kubelet[1503]: E1018 14:16:59.481167    1503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="95d229e3-8666-49b8-b2d2-2e34ed8f3aab"
	Oct 18 14:17:00 addons-891059 kubelet[1503]: E1018 14:17:00.046403    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760797020045969539  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:520517}  inodes_used:{value:186}}"
	Oct 18 14:17:00 addons-891059 kubelet[1503]: E1018 14:17:00.046426    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760797020045969539  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:520517}  inodes_used:{value:186}}"
	Oct 18 14:17:05 addons-891059 kubelet[1503]: E1018 14:17:05.577040    1503 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 18 14:17:05 addons-891059 kubelet[1503]: E1018 14:17:05.577097    1503 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 18 14:17:05 addons-891059 kubelet[1503]: E1018 14:17:05.577321    1503 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(3922f28b-1c3b-4a38-b461-c5f57823b438): ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 18 14:17:05 addons-891059 kubelet[1503]: E1018 14:17:05.577373    1503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="3922f28b-1c3b-4a38-b461-c5f57823b438"
	Oct 18 14:17:10 addons-891059 kubelet[1503]: E1018 14:17:10.049889    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760797030049332450  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:520517}  inodes_used:{value:186}}"
	Oct 18 14:17:10 addons-891059 kubelet[1503]: E1018 14:17:10.049922    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760797030049332450  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:520517}  inodes_used:{value:186}}"
	Oct 18 14:17:13 addons-891059 kubelet[1503]: E1018 14:17:13.475091    1503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="95d229e3-8666-49b8-b2d2-2e34ed8f3aab"
	Oct 18 14:17:16 addons-891059 kubelet[1503]: E1018 14:17:16.478311    1503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="3922f28b-1c3b-4a38-b461-c5f57823b438"
	Oct 18 14:17:20 addons-891059 kubelet[1503]: E1018 14:17:20.052722    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760797040052130785  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:520517}  inodes_used:{value:186}}"
	Oct 18 14:17:20 addons-891059 kubelet[1503]: E1018 14:17:20.053096    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760797040052130785  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:520517}  inodes_used:{value:186}}"
	Oct 18 14:17:25 addons-891059 kubelet[1503]: I1018 14:17:25.481745    1503 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 14:17:27 addons-891059 kubelet[1503]: E1018 14:17:27.492338    1503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="3922f28b-1c3b-4a38-b461-c5f57823b438"
	
	
	==> storage-provisioner [da75007bac0f47603bb3540fd8ae444427639a840b26793c26a279445acc6504] <==
	W1018 14:17:04.775471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:06.780696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:06.787478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:08.792354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:08.799181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:10.804093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:10.811837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:12.815920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:12.821666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:14.825474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:14.831478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:16.835625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:16.842466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:18.848499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:18.855838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:20.859921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:20.866078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:22.870905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:22.883879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:24.887316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:24.895476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:26.898903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:26.904770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:28.909971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:28.922815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-891059 -n addons-891059
helpers_test.go:269: (dbg) Run:  kubectl --context addons-891059 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-nbrm2 ingress-nginx-admission-patch-lz2l5
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-891059 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-nbrm2 ingress-nginx-admission-patch-lz2l5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-891059 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-nbrm2 ingress-nginx-admission-patch-lz2l5: exit status 1 (86.033627ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-891059/192.168.39.100
	Start Time:       Sat, 18 Oct 2025 14:11:37 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lrm2j (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lrm2j:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  5m52s                 default-scheduler  Successfully assigned default/nginx to addons-891059
	  Normal   Pulling    108s (x3 over 5m52s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     24s (x3 over 4m2s)    kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     24s (x3 over 4m2s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    2s (x4 over 4m1s)     kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2s (x4 over 4m1s)     kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-891059/192.168.39.100
	Start Time:       Sat, 18 Oct 2025 14:11:27 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-48qc7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-48qc7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m2s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-891059
	  Warning  Failed     55s (x3 over 4m33s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     55s (x3 over 4m33s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    16s (x5 over 4m33s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     16s (x5 over 4m33s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    4s (x4 over 6m2s)    kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-891059/192.168.39.100
	Start Time:       Sat, 18 Oct 2025 14:11:23 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2cp2j (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-2cp2j:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m6s                 default-scheduler  Successfully assigned default/test-local-path to addons-891059
	  Warning  Failed     5m4s                 kubelet            Failed to pull image "busybox:stable": initializing source docker://busybox:stable: reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     85s (x3 over 5m4s)   kubelet            Error: ErrImagePull
	  Warning  Failed     85s (x2 over 3m17s)  kubelet            Failed to pull image "busybox:stable": fetching target platform image selected from image index: reading manifest sha256:4a35a7836fe08f340a42e25c4ac5eef4439585bbbb817b7bd28b2cd87c742642 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    47s (x5 over 5m3s)   kubelet            Back-off pulling image "busybox:stable"
	  Warning  Failed     47s (x5 over 5m3s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    35s (x4 over 6m5s)   kubelet            Pulling image "busybox:stable"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-nbrm2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-lz2l5" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-891059 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-nbrm2 ingress-nginx-admission-patch-lz2l5: exit status 1
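Annotation: the two NotFound errors in stderr are expected noise rather than part of the failure. ingress-nginx-admission-create-nbrm2 and ingress-nginx-admission-patch-lz2l5 are one-shot admission Job pods that had already been garbage-collected by the time the post-mortem ran, so describe exits 1 even though the three pods that matter were described above. One way to confirm, assuming the addon's Jobs live in the ingress-nginx namespace as usual for this addon:

	kubectl --context addons-891059 get jobs,pods -n ingress-nginx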
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-891059 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-891059 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-891059 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.077085313s)
--- FAIL: TestAddons/parallel/CSI (380.17s)
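Annotation: the CSI test did not fail on CSI functionality itself. Every non-running pod listed above is stuck in ImagePullBackOff because unauthenticated pulls from docker.io hit Docker Hub's pull rate limit (toomanyrequests). A minimal sketch of one workaround on a runner like this, assuming the images can be pulled once on the host (for example after docker login), is to side-load them into the profile so kubelet never needs to contact the registry; the profile name addons-891059 is taken from the logs above:

	# Pull once on the host (a docker login first raises the host's rate limit),
	# then copy the images into the minikube node's CRI-O image store.
	docker pull docker.io/nginx:alpine
	docker pull docker.io/nginx:latest
	minikube -p addons-891059 image load docker.io/nginx:alpine
	minikube -p addons-891059 image load docker.io/nginx:latest

	# Confirm the images are now visible to the node's container runtime.
	minikube -p addons-891059 image ls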

                                                
                                    
TestAddons/parallel/LocalPath (231.53s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-891059 apply -f testdata/storage-provisioner-rancher/pvc.yaml
I1018 14:11:17.720145 1759792 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:955: (dbg) Run:  kubectl --context addons-891059 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891059 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891059 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891059 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891059 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891059 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891059 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [d6bcb3d3-06c5-4ec8-8496-cf302660e01d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:337: TestAddons/parallel/LocalPath: WARNING: pod list for "default" "run=test-local-path" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:962: ***** TestAddons/parallel/LocalPath: pod "run=test-local-path" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:962: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-891059 -n addons-891059
addons_test.go:962: TestAddons/parallel/LocalPath: showing logs for failed pods as of 2025-10-18 14:14:23.400728682 +0000 UTC m=+357.136085716
addons_test.go:962: (dbg) Run:  kubectl --context addons-891059 describe po test-local-path -n default
addons_test.go:962: (dbg) kubectl --context addons-891059 describe po test-local-path -n default:
Name:             test-local-path
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-891059/192.168.39.100
Start Time:       Sat, 18 Oct 2025 14:11:23 +0000
Labels:           run=test-local-path
Annotations:      <none>
Status:           Pending
IP:               10.244.0.26
IPs:
  IP:  10.244.0.26
Containers:
  busybox:
    Container ID:  
    Image:         busybox:stable
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      echo 'local-path-provisioner' > /test/file1
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /test from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2cp2j (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  test-pvc
    ReadOnly:   false
  kube-api-access-2cp2j:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  3m                    default-scheduler  Successfully assigned default/test-local-path to addons-891059
  Warning  Failed     118s                  kubelet            Failed to pull image "busybox:stable": initializing source docker://busybox:stable: reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    117s                  kubelet            Back-off pulling image "busybox:stable"
  Warning  Failed     117s                  kubelet            Error: ImagePullBackOff
  Normal   Pulling    103s (x2 over 2m59s)  kubelet            Pulling image "busybox:stable"
  Warning  Failed     11s (x2 over 118s)    kubelet            Error: ErrImagePull
  Warning  Failed     11s                   kubelet            Failed to pull image "busybox:stable": fetching target platform image selected from image index: reading manifest sha256:4a35a7836fe08f340a42e25c4ac5eef4439585bbbb817b7bd28b2cd87c742642 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
addons_test.go:962: (dbg) Run:  kubectl --context addons-891059 logs test-local-path -n default
addons_test.go:962: (dbg) Non-zero exit: kubectl --context addons-891059 logs test-local-path -n default: exit status 1 (82.105012ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "busybox" in pod "test-local-path" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:962: kubectl --context addons-891059 logs test-local-path -n default: exit status 1
addons_test.go:963: failed waiting for test-local-path pod: run=test-local-path within 3m0s: context deadline exceeded
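Annotation: same root cause as the CSI failure; busybox:stable could not be pulled anonymously, so the pod's one-line write to /test/file1 never ran. A hedged sketch of re-running the pod with authenticated pulls follows. The regcred secret name and the credential placeholders are hypothetical, and the manifest is reconstructed from the describe output above, so the real testdata/storage-provisioner-rancher/pod.yaml may differ:

	# Hypothetical pull secret; substitute real Docker Hub credentials.
	kubectl --context addons-891059 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<token>

	# Re-apply the test pod with the secret attached; everything besides
	# imagePullSecrets mirrors the describe output above.
	kubectl --context addons-891059 apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: test-local-path
	  labels:
	    run: test-local-path
	spec:
	  imagePullSecrets:
	    - name: regcred
	  containers:
	    - name: busybox
	      image: busybox:stable
	      command: ["sh", "-c", "echo 'local-path-provisioner' > /test/file1"]
	      volumeMounts:
	        - name: data
	          mountPath: /test
	  volumes:
	    - name: data
	      persistentVolumeClaim:
	        claimName: test-pvc
	EOF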
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-891059 -n addons-891059
helpers_test.go:252: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-891059 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-891059 logs -n 25: (1.729904183s)
helpers_test.go:260: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                                ARGS                                                                                                                                                                                                                                                │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-031579 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                │ download-only-031579 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              │ minikube             │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ 18 Oct 25 14:08 UTC │
	│ delete  │ -p download-only-031579                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-031579 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ 18 Oct 25 14:08 UTC │
	│ start   │ -o=json --download-only -p download-only-398489 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                │ download-only-398489 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              │ minikube             │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ 18 Oct 25 14:08 UTC │
	│ delete  │ -p download-only-398489                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-398489 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ 18 Oct 25 14:08 UTC │
	│ delete  │ -p download-only-031579                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-031579 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ 18 Oct 25 14:08 UTC │
	│ delete  │ -p download-only-398489                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-398489 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ 18 Oct 25 14:08 UTC │
	│ start   │ --download-only -p binary-mirror-305392 --alsologtostderr --binary-mirror http://127.0.0.1:39643 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-305392 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │                     │
	│ delete  │ -p binary-mirror-305392                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ binary-mirror-305392 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ 18 Oct 25 14:08 UTC │
	│ addons  │ enable dashboard -p addons-891059                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │                     │
	│ addons  │ disable dashboard -p addons-891059                                                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │                     │
	│ start   │ -p addons-891059 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ 18 Oct 25 14:11 UTC │
	│ addons  │ addons-891059 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:11 UTC │ 18 Oct 25 14:11 UTC │
	│ addons  │ addons-891059 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:11 UTC │ 18 Oct 25 14:11 UTC │
	│ addons  │ addons-891059 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:11 UTC │ 18 Oct 25 14:11 UTC │
	│ addons  │ addons-891059 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:11 UTC │ 18 Oct 25 14:11 UTC │
	│ addons  │ addons-891059 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:11 UTC │ 18 Oct 25 14:11 UTC │
	│ ip      │ addons-891059 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:12 UTC │ 18 Oct 25 14:12 UTC │
	│ addons  │ addons-891059 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:12 UTC │ 18 Oct 25 14:12 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-891059                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:12 UTC │ 18 Oct 25 14:12 UTC │
	│ addons  │ addons-891059 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:12 UTC │ 18 Oct 25 14:12 UTC │
	│ addons  │ addons-891059 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:12 UTC │ 18 Oct 25 14:12 UTC │
	│ addons  │ addons-891059 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:12 UTC │ 18 Oct 25 14:12 UTC │
	│ addons  │ enable headlamp -p addons-891059 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-891059        │ jenkins │ v1.37.0 │ 18 Oct 25 14:12 UTC │ 18 Oct 25 14:12 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 14:08:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 14:08:38.383524 1760410 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:08:38.383797 1760410 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:08:38.383806 1760410 out.go:374] Setting ErrFile to fd 2...
	I1018 14:08:38.383810 1760410 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:08:38.383984 1760410 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1755824/.minikube/bin
	I1018 14:08:38.384564 1760410 out.go:368] Setting JSON to false
	I1018 14:08:38.385550 1760410 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":21066,"bootTime":1760775452,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 14:08:38.385650 1760410 start.go:141] virtualization: kvm guest
	I1018 14:08:38.387370 1760410 out.go:179] * [addons-891059] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 14:08:38.388598 1760410 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 14:08:38.388649 1760410 notify.go:220] Checking for updates...
	I1018 14:08:38.390750 1760410 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 14:08:38.391832 1760410 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1755824/kubeconfig
	I1018 14:08:38.392857 1760410 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1755824/.minikube
	I1018 14:08:38.393954 1760410 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 14:08:38.395387 1760410 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 14:08:38.397030 1760410 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 14:08:38.428089 1760410 out.go:179] * Using the kvm2 driver based on user configuration
	I1018 14:08:38.429204 1760410 start.go:305] selected driver: kvm2
	I1018 14:08:38.429233 1760410 start.go:925] validating driver "kvm2" against <nil>
	I1018 14:08:38.429248 1760410 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 14:08:38.429988 1760410 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 14:08:38.430081 1760410 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-1755824/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 14:08:38.444435 1760410 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 14:08:38.444496 1760410 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-1755824/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 14:08:38.459956 1760410 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 14:08:38.460007 1760410 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 14:08:38.460292 1760410 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 14:08:38.460324 1760410 cni.go:84] Creating CNI manager for ""
	I1018 14:08:38.460395 1760410 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 14:08:38.460407 1760410 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1018 14:08:38.460458 1760410 start.go:349] cluster config:
	{Name:addons-891059 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-891059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:08:38.460561 1760410 iso.go:125] acquiring lock: {Name:mk7faf1d3c636cdbb2becc20102b665984151b51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 14:08:38.462275 1760410 out.go:179] * Starting "addons-891059" primary control-plane node in "addons-891059" cluster
	I1018 14:08:38.463616 1760410 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 14:08:38.463663 1760410 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 14:08:38.463679 1760410 cache.go:58] Caching tarball of preloaded images
	I1018 14:08:38.463782 1760410 preload.go:233] Found /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 14:08:38.463797 1760410 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 14:08:38.464313 1760410 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/config.json ...
	I1018 14:08:38.464364 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/config.json: {Name:mk7320464dda7a1239a5641208a2baa2eb0aeb82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
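[Editor's note] The two lines above save the freshly generated cluster config to the profile's config.json behind an advisory lock (the mutex Spec with Name/Delay/Timeout is visible in the log). A minimal, hypothetical Go sketch of that save step; it substitutes an atomic temp-file-plus-rename for minikube's real lock helper, and the trimmed ClusterConfig struct here is illustrative only:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// ClusterConfig is a trimmed, hypothetical stand-in for the large config
// struct dumped in the "cluster config:" line above.
type ClusterConfig struct {
	Name   string
	Driver string
	Memory int
	CPUs   int
}

// writeProfileConfig marshals the profile and replaces config.json atomically;
// rename(2) is atomic on POSIX, so readers never observe a half-written file.
// (minikube additionally serializes writers with the mutex shown in the log.)
func writeProfileConfig(path string, cfg ClusterConfig) error {
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, data, 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	path := filepath.Join(os.TempDir(), "config.json")
	cfg := ClusterConfig{Name: "addons-891059", Driver: "kvm2", Memory: 4096, CPUs: 2}
	if err := writeProfileConfig(path, cfg); err != nil {
		fmt.Println("save failed:", err)
		return
	}
	fmt.Println("saved", path)
}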
	I1018 14:08:38.464529 1760410 start.go:360] acquireMachinesLock for addons-891059: {Name:mkd96faf82baee5d117338197f9c6cbf4f45de94 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1018 14:08:38.464580 1760410 start.go:364] duration metric: took 35.666µs to acquireMachinesLock for "addons-891059"
	I1018 14:08:38.464596 1760410 start.go:93] Provisioning new machine with config: &{Name:addons-891059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-891059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 14:08:38.464647 1760410 start.go:125] createHost starting for "" (driver="kvm2")
	I1018 14:08:38.467259 1760410 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1018 14:08:38.467474 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:08:38.467524 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:08:38.481384 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38917
	I1018 14:08:38.481876 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:08:38.482458 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:08:38.482488 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:08:38.482906 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:08:38.483171 1760410 main.go:141] libmachine: (addons-891059) Calling .GetMachineName
	I1018 14:08:38.483408 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:08:38.483601 1760410 start.go:159] libmachine.API.Create for "addons-891059" (driver="kvm2")
	I1018 14:08:38.483638 1760410 client.go:168] LocalClient.Create starting
	I1018 14:08:38.483679 1760410 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem
	I1018 14:08:38.745193 1760410 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/cert.pem
	I1018 14:08:39.239522 1760410 main.go:141] libmachine: Running pre-create checks...
	I1018 14:08:39.239552 1760410 main.go:141] libmachine: (addons-891059) Calling .PreCreateCheck
	I1018 14:08:39.240096 1760410 main.go:141] libmachine: (addons-891059) Calling .GetConfigRaw
	I1018 14:08:39.240581 1760410 main.go:141] libmachine: Creating machine...
	I1018 14:08:39.240598 1760410 main.go:141] libmachine: (addons-891059) Calling .Create
	I1018 14:08:39.240735 1760410 main.go:141] libmachine: (addons-891059) creating domain...
	I1018 14:08:39.240756 1760410 main.go:141] libmachine: (addons-891059) creating network...
	I1018 14:08:39.242180 1760410 main.go:141] libmachine: (addons-891059) DBG | found existing default network
	I1018 14:08:39.242394 1760410 main.go:141] libmachine: (addons-891059) DBG | <network>
	I1018 14:08:39.242421 1760410 main.go:141] libmachine: (addons-891059) DBG |   <name>default</name>
	I1018 14:08:39.242432 1760410 main.go:141] libmachine: (addons-891059) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1018 14:08:39.242439 1760410 main.go:141] libmachine: (addons-891059) DBG |   <forward mode='nat'>
	I1018 14:08:39.242474 1760410 main.go:141] libmachine: (addons-891059) DBG |     <nat>
	I1018 14:08:39.242495 1760410 main.go:141] libmachine: (addons-891059) DBG |       <port start='1024' end='65535'/>
	I1018 14:08:39.242573 1760410 main.go:141] libmachine: (addons-891059) DBG |     </nat>
	I1018 14:08:39.242596 1760410 main.go:141] libmachine: (addons-891059) DBG |   </forward>
	I1018 14:08:39.242607 1760410 main.go:141] libmachine: (addons-891059) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1018 14:08:39.242619 1760410 main.go:141] libmachine: (addons-891059) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1018 14:08:39.242634 1760410 main.go:141] libmachine: (addons-891059) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1018 14:08:39.242645 1760410 main.go:141] libmachine: (addons-891059) DBG |     <dhcp>
	I1018 14:08:39.242658 1760410 main.go:141] libmachine: (addons-891059) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1018 14:08:39.242666 1760410 main.go:141] libmachine: (addons-891059) DBG |     </dhcp>
	I1018 14:08:39.242673 1760410 main.go:141] libmachine: (addons-891059) DBG |   </ip>
	I1018 14:08:39.242680 1760410 main.go:141] libmachine: (addons-891059) DBG | </network>
	I1018 14:08:39.242694 1760410 main.go:141] libmachine: (addons-891059) DBG | 
	I1018 14:08:39.243130 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:39.242976 1760437 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123570}
	I1018 14:08:39.243178 1760410 main.go:141] libmachine: (addons-891059) DBG | defining private network:
	I1018 14:08:39.243193 1760410 main.go:141] libmachine: (addons-891059) DBG | 
	I1018 14:08:39.243204 1760410 main.go:141] libmachine: (addons-891059) DBG | <network>
	I1018 14:08:39.243216 1760410 main.go:141] libmachine: (addons-891059) DBG |   <name>mk-addons-891059</name>
	I1018 14:08:39.243222 1760410 main.go:141] libmachine: (addons-891059) DBG |   <dns enable='no'/>
	I1018 14:08:39.243227 1760410 main.go:141] libmachine: (addons-891059) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1018 14:08:39.243234 1760410 main.go:141] libmachine: (addons-891059) DBG |     <dhcp>
	I1018 14:08:39.243239 1760410 main.go:141] libmachine: (addons-891059) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1018 14:08:39.243245 1760410 main.go:141] libmachine: (addons-891059) DBG |     </dhcp>
	I1018 14:08:39.243249 1760410 main.go:141] libmachine: (addons-891059) DBG |   </ip>
	I1018 14:08:39.243263 1760410 main.go:141] libmachine: (addons-891059) DBG | </network>
	I1018 14:08:39.243270 1760410 main.go:141] libmachine: (addons-891059) DBG | 
	I1018 14:08:39.248946 1760410 main.go:141] libmachine: (addons-891059) DBG | creating private network mk-addons-891059 192.168.39.0/24...
	I1018 14:08:39.319941 1760410 main.go:141] libmachine: (addons-891059) DBG | private network mk-addons-891059 192.168.39.0/24 created
	I1018 14:08:39.320210 1760410 main.go:141] libmachine: (addons-891059) DBG | <network>
	I1018 14:08:39.320231 1760410 main.go:141] libmachine: (addons-891059) DBG |   <name>mk-addons-891059</name>
	I1018 14:08:39.320247 1760410 main.go:141] libmachine: (addons-891059) setting up store path in /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059 ...
	I1018 14:08:39.320262 1760410 main.go:141] libmachine: (addons-891059) DBG |   <uuid>3e7dc5ca-8c6a-4f5a-8f08-752a5d85d27d</uuid>
	I1018 14:08:39.320883 1760410 main.go:141] libmachine: (addons-891059) building disk image from file:///home/jenkins/minikube-integration/21409-1755824/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1018 14:08:39.320919 1760410 main.go:141] libmachine: (addons-891059) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1018 14:08:39.320937 1760410 main.go:141] libmachine: (addons-891059) Downloading /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21409-1755824/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1018 14:08:39.320964 1760410 main.go:141] libmachine: (addons-891059) DBG |   <mac address='52:54:00:80:09:dc'/>
	I1018 14:08:39.320974 1760410 main.go:141] libmachine: (addons-891059) DBG |   <dns enable='no'/>
	I1018 14:08:39.320985 1760410 main.go:141] libmachine: (addons-891059) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1018 14:08:39.320997 1760410 main.go:141] libmachine: (addons-891059) DBG |     <dhcp>
	I1018 14:08:39.321006 1760410 main.go:141] libmachine: (addons-891059) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1018 14:08:39.321013 1760410 main.go:141] libmachine: (addons-891059) DBG |     </dhcp>
	I1018 14:08:39.321038 1760410 main.go:141] libmachine: (addons-891059) DBG |   </ip>
	I1018 14:08:39.321045 1760410 main.go:141] libmachine: (addons-891059) DBG | </network>
	I1018 14:08:39.321061 1760410 main.go:141] libmachine: (addons-891059) DBG | 
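[Editor's note] The XML dumps above show the private network as submitted and as libvirt stored it (bridge name virbr1 and a MAC filled in). A minimal sketch of the define-and-start sequence behind the "creating private network" lines, assuming the libvirt.org/go/libvirt cgo bindings rather than minikube's exact driver code:

package main

import (
	"fmt"
	"log"

	libvirt "libvirt.org/go/libvirt"
)

const networkXML = `<network>
  <name>mk-addons-891059</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// NetworkDefineXML registers the persistent network; Create brings it up
	// (libvirt allocates the bridge and starts dnsmasq to serve the DHCP range).
	net, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		log.Fatal(err)
	}
	defer net.Free()
	if err := net.Create(); err != nil {
		log.Fatal(err)
	}
	active, _ := net.IsActive()
	fmt.Println("network active:", active)
}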
	I1018 14:08:39.321072 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:39.320218 1760437 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21409-1755824/.minikube
	I1018 14:08:39.610846 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:39.610682 1760437 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa...
	I1018 14:08:39.691572 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:39.691412 1760437 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/addons-891059.rawdisk...
	I1018 14:08:39.691603 1760410 main.go:141] libmachine: (addons-891059) DBG | Writing magic tar header
	I1018 14:08:39.691616 1760410 main.go:141] libmachine: (addons-891059) DBG | Writing SSH key tar header
	I1018 14:08:39.691625 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:39.691531 1760437 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059 ...
	I1018 14:08:39.691639 1760410 main.go:141] libmachine: (addons-891059) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059
	I1018 14:08:39.691766 1760410 main.go:141] libmachine: (addons-891059) setting executable bit set on /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059 (perms=drwx------)
	I1018 14:08:39.691804 1760410 main.go:141] libmachine: (addons-891059) setting executable bit set on /home/jenkins/minikube-integration/21409-1755824/.minikube/machines (perms=drwxr-xr-x)
	I1018 14:08:39.691812 1760410 main.go:141] libmachine: (addons-891059) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines
	I1018 14:08:39.691822 1760410 main.go:141] libmachine: (addons-891059) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-1755824/.minikube
	I1018 14:08:39.691828 1760410 main.go:141] libmachine: (addons-891059) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-1755824
	I1018 14:08:39.691835 1760410 main.go:141] libmachine: (addons-891059) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1018 14:08:39.691839 1760410 main.go:141] libmachine: (addons-891059) DBG | checking permissions on dir: /home/jenkins
	I1018 14:08:39.691848 1760410 main.go:141] libmachine: (addons-891059) DBG | checking permissions on dir: /home
	I1018 14:08:39.691853 1760410 main.go:141] libmachine: (addons-891059) DBG | skipping /home - not owner
	I1018 14:08:39.691897 1760410 main.go:141] libmachine: (addons-891059) setting executable bit set on /home/jenkins/minikube-integration/21409-1755824/.minikube (perms=drwxr-xr-x)
	I1018 14:08:39.691923 1760410 main.go:141] libmachine: (addons-891059) setting executable bit set on /home/jenkins/minikube-integration/21409-1755824 (perms=drwxrwxr-x)
	I1018 14:08:39.691940 1760410 main.go:141] libmachine: (addons-891059) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1018 14:08:39.691998 1760410 main.go:141] libmachine: (addons-891059) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1018 14:08:39.692026 1760410 main.go:141] libmachine: (addons-891059) defining domain...
	I1018 14:08:39.693006 1760410 main.go:141] libmachine: (addons-891059) defining domain using XML: 
	I1018 14:08:39.693019 1760410 main.go:141] libmachine: (addons-891059) <domain type='kvm'>
	I1018 14:08:39.693025 1760410 main.go:141] libmachine: (addons-891059)   <name>addons-891059</name>
	I1018 14:08:39.693030 1760410 main.go:141] libmachine: (addons-891059)   <memory unit='MiB'>4096</memory>
	I1018 14:08:39.693036 1760410 main.go:141] libmachine: (addons-891059)   <vcpu>2</vcpu>
	I1018 14:08:39.693040 1760410 main.go:141] libmachine: (addons-891059)   <features>
	I1018 14:08:39.693046 1760410 main.go:141] libmachine: (addons-891059)     <acpi/>
	I1018 14:08:39.693053 1760410 main.go:141] libmachine: (addons-891059)     <apic/>
	I1018 14:08:39.693058 1760410 main.go:141] libmachine: (addons-891059)     <pae/>
	I1018 14:08:39.693064 1760410 main.go:141] libmachine: (addons-891059)   </features>
	I1018 14:08:39.693069 1760410 main.go:141] libmachine: (addons-891059)   <cpu mode='host-passthrough'>
	I1018 14:08:39.693074 1760410 main.go:141] libmachine: (addons-891059)   </cpu>
	I1018 14:08:39.693078 1760410 main.go:141] libmachine: (addons-891059)   <os>
	I1018 14:08:39.693085 1760410 main.go:141] libmachine: (addons-891059)     <type>hvm</type>
	I1018 14:08:39.693090 1760410 main.go:141] libmachine: (addons-891059)     <boot dev='cdrom'/>
	I1018 14:08:39.693095 1760410 main.go:141] libmachine: (addons-891059)     <boot dev='hd'/>
	I1018 14:08:39.693100 1760410 main.go:141] libmachine: (addons-891059)     <bootmenu enable='no'/>
	I1018 14:08:39.693104 1760410 main.go:141] libmachine: (addons-891059)   </os>
	I1018 14:08:39.693134 1760410 main.go:141] libmachine: (addons-891059)   <devices>
	I1018 14:08:39.693159 1760410 main.go:141] libmachine: (addons-891059)     <disk type='file' device='cdrom'>
	I1018 14:08:39.693176 1760410 main.go:141] libmachine: (addons-891059)       <source file='/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/boot2docker.iso'/>
	I1018 14:08:39.693184 1760410 main.go:141] libmachine: (addons-891059)       <target dev='hdc' bus='scsi'/>
	I1018 14:08:39.693194 1760410 main.go:141] libmachine: (addons-891059)       <readonly/>
	I1018 14:08:39.693202 1760410 main.go:141] libmachine: (addons-891059)     </disk>
	I1018 14:08:39.693215 1760410 main.go:141] libmachine: (addons-891059)     <disk type='file' device='disk'>
	I1018 14:08:39.693225 1760410 main.go:141] libmachine: (addons-891059)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1018 14:08:39.693242 1760410 main.go:141] libmachine: (addons-891059)       <source file='/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/addons-891059.rawdisk'/>
	I1018 14:08:39.693252 1760410 main.go:141] libmachine: (addons-891059)       <target dev='hda' bus='virtio'/>
	I1018 14:08:39.693259 1760410 main.go:141] libmachine: (addons-891059)     </disk>
	I1018 14:08:39.693271 1760410 main.go:141] libmachine: (addons-891059)     <interface type='network'>
	I1018 14:08:39.693281 1760410 main.go:141] libmachine: (addons-891059)       <source network='mk-addons-891059'/>
	I1018 14:08:39.693293 1760410 main.go:141] libmachine: (addons-891059)       <model type='virtio'/>
	I1018 14:08:39.693303 1760410 main.go:141] libmachine: (addons-891059)     </interface>
	I1018 14:08:39.693324 1760410 main.go:141] libmachine: (addons-891059)     <interface type='network'>
	I1018 14:08:39.693354 1760410 main.go:141] libmachine: (addons-891059)       <source network='default'/>
	I1018 14:08:39.693363 1760410 main.go:141] libmachine: (addons-891059)       <model type='virtio'/>
	I1018 14:08:39.693367 1760410 main.go:141] libmachine: (addons-891059)     </interface>
	I1018 14:08:39.693373 1760410 main.go:141] libmachine: (addons-891059)     <serial type='pty'>
	I1018 14:08:39.693396 1760410 main.go:141] libmachine: (addons-891059)       <target port='0'/>
	I1018 14:08:39.693404 1760410 main.go:141] libmachine: (addons-891059)     </serial>
	I1018 14:08:39.693408 1760410 main.go:141] libmachine: (addons-891059)     <console type='pty'>
	I1018 14:08:39.693416 1760410 main.go:141] libmachine: (addons-891059)       <target type='serial' port='0'/>
	I1018 14:08:39.693426 1760410 main.go:141] libmachine: (addons-891059)     </console>
	I1018 14:08:39.693446 1760410 main.go:141] libmachine: (addons-891059)     <rng model='virtio'>
	I1018 14:08:39.693467 1760410 main.go:141] libmachine: (addons-891059)       <backend model='random'>/dev/random</backend>
	I1018 14:08:39.693482 1760410 main.go:141] libmachine: (addons-891059)     </rng>
	I1018 14:08:39.693492 1760410 main.go:141] libmachine: (addons-891059)   </devices>
	I1018 14:08:39.693501 1760410 main.go:141] libmachine: (addons-891059) </domain>
	I1018 14:08:39.693506 1760410 main.go:141] libmachine: (addons-891059) 
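[Editor's note] Continuing the same assumption (libvirt.org/go/libvirt bindings), defining a domain from XML like the block above is a single call; the devices and file paths in this sketch are trimmed to keep it short:

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

const domainXML = `<domain type='kvm'>
  <name>addons-891059</name>
  <memory unit='MiB'>4096</memory>
  <vcpu>2</vcpu>
  <os><type>hvm</type><boot dev='hd'/></os>
  <devices>
    <interface type='network'>
      <source network='mk-addons-891059'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// DomainDefineXML creates a persistent but inactive domain; libvirt fills
	// in defaults (UUID, MAC addresses, PCI slots), which is why the "starting
	// domain XML" dumped further down is richer than the XML submitted here.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()
	log.Println("domain defined (not yet started)")
}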
	I1018 14:08:39.706650 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:f4:cf:b8 in network default
	I1018 14:08:39.707254 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:39.707274 1760410 main.go:141] libmachine: (addons-891059) starting domain...
	I1018 14:08:39.707286 1760410 main.go:141] libmachine: (addons-891059) ensuring networks are active...
	I1018 14:08:39.707989 1760410 main.go:141] libmachine: (addons-891059) Ensuring network default is active
	I1018 14:08:39.708292 1760410 main.go:141] libmachine: (addons-891059) Ensuring network mk-addons-891059 is active
	I1018 14:08:39.708895 1760410 main.go:141] libmachine: (addons-891059) getting domain XML...
	I1018 14:08:39.709831 1760410 main.go:141] libmachine: (addons-891059) DBG | starting domain XML:
	I1018 14:08:39.709853 1760410 main.go:141] libmachine: (addons-891059) DBG | <domain type='kvm'>
	I1018 14:08:39.709867 1760410 main.go:141] libmachine: (addons-891059) DBG |   <name>addons-891059</name>
	I1018 14:08:39.709876 1760410 main.go:141] libmachine: (addons-891059) DBG |   <uuid>372d9231-4fa4-4480-95fc-5052e6676096</uuid>
	I1018 14:08:39.709886 1760410 main.go:141] libmachine: (addons-891059) DBG |   <memory unit='KiB'>4194304</memory>
	I1018 14:08:39.709894 1760410 main.go:141] libmachine: (addons-891059) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1018 14:08:39.709903 1760410 main.go:141] libmachine: (addons-891059) DBG |   <vcpu placement='static'>2</vcpu>
	I1018 14:08:39.709907 1760410 main.go:141] libmachine: (addons-891059) DBG |   <os>
	I1018 14:08:39.709920 1760410 main.go:141] libmachine: (addons-891059) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1018 14:08:39.709930 1760410 main.go:141] libmachine: (addons-891059) DBG |     <boot dev='cdrom'/>
	I1018 14:08:39.709943 1760410 main.go:141] libmachine: (addons-891059) DBG |     <boot dev='hd'/>
	I1018 14:08:39.709954 1760410 main.go:141] libmachine: (addons-891059) DBG |     <bootmenu enable='no'/>
	I1018 14:08:39.709988 1760410 main.go:141] libmachine: (addons-891059) DBG |   </os>
	I1018 14:08:39.710010 1760410 main.go:141] libmachine: (addons-891059) DBG |   <features>
	I1018 14:08:39.710020 1760410 main.go:141] libmachine: (addons-891059) DBG |     <acpi/>
	I1018 14:08:39.710028 1760410 main.go:141] libmachine: (addons-891059) DBG |     <apic/>
	I1018 14:08:39.710042 1760410 main.go:141] libmachine: (addons-891059) DBG |     <pae/>
	I1018 14:08:39.710052 1760410 main.go:141] libmachine: (addons-891059) DBG |   </features>
	I1018 14:08:39.710065 1760410 main.go:141] libmachine: (addons-891059) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1018 14:08:39.710080 1760410 main.go:141] libmachine: (addons-891059) DBG |   <clock offset='utc'/>
	I1018 14:08:39.710094 1760410 main.go:141] libmachine: (addons-891059) DBG |   <on_poweroff>destroy</on_poweroff>
	I1018 14:08:39.710106 1760410 main.go:141] libmachine: (addons-891059) DBG |   <on_reboot>restart</on_reboot>
	I1018 14:08:39.710116 1760410 main.go:141] libmachine: (addons-891059) DBG |   <on_crash>destroy</on_crash>
	I1018 14:08:39.710124 1760410 main.go:141] libmachine: (addons-891059) DBG |   <devices>
	I1018 14:08:39.710141 1760410 main.go:141] libmachine: (addons-891059) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1018 14:08:39.710157 1760410 main.go:141] libmachine: (addons-891059) DBG |     <disk type='file' device='cdrom'>
	I1018 14:08:39.710174 1760410 main.go:141] libmachine: (addons-891059) DBG |       <driver name='qemu' type='raw'/>
	I1018 14:08:39.710189 1760410 main.go:141] libmachine: (addons-891059) DBG |       <source file='/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/boot2docker.iso'/>
	I1018 14:08:39.710202 1760410 main.go:141] libmachine: (addons-891059) DBG |       <target dev='hdc' bus='scsi'/>
	I1018 14:08:39.710213 1760410 main.go:141] libmachine: (addons-891059) DBG |       <readonly/>
	I1018 14:08:39.710241 1760410 main.go:141] libmachine: (addons-891059) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1018 14:08:39.710261 1760410 main.go:141] libmachine: (addons-891059) DBG |     </disk>
	I1018 14:08:39.710268 1760410 main.go:141] libmachine: (addons-891059) DBG |     <disk type='file' device='disk'>
	I1018 14:08:39.710278 1760410 main.go:141] libmachine: (addons-891059) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1018 14:08:39.710289 1760410 main.go:141] libmachine: (addons-891059) DBG |       <source file='/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/addons-891059.rawdisk'/>
	I1018 14:08:39.710297 1760410 main.go:141] libmachine: (addons-891059) DBG |       <target dev='hda' bus='virtio'/>
	I1018 14:08:39.710304 1760410 main.go:141] libmachine: (addons-891059) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1018 14:08:39.710311 1760410 main.go:141] libmachine: (addons-891059) DBG |     </disk>
	I1018 14:08:39.710317 1760410 main.go:141] libmachine: (addons-891059) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1018 14:08:39.710325 1760410 main.go:141] libmachine: (addons-891059) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1018 14:08:39.710331 1760410 main.go:141] libmachine: (addons-891059) DBG |     </controller>
	I1018 14:08:39.710338 1760410 main.go:141] libmachine: (addons-891059) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1018 14:08:39.710353 1760410 main.go:141] libmachine: (addons-891059) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1018 14:08:39.710359 1760410 main.go:141] libmachine: (addons-891059) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1018 14:08:39.710375 1760410 main.go:141] libmachine: (addons-891059) DBG |     </controller>
	I1018 14:08:39.710394 1760410 main.go:141] libmachine: (addons-891059) DBG |     <interface type='network'>
	I1018 14:08:39.710417 1760410 main.go:141] libmachine: (addons-891059) DBG |       <mac address='52:54:00:12:2f:9d'/>
	I1018 14:08:39.710440 1760410 main.go:141] libmachine: (addons-891059) DBG |       <source network='mk-addons-891059'/>
	I1018 14:08:39.710448 1760410 main.go:141] libmachine: (addons-891059) DBG |       <model type='virtio'/>
	I1018 14:08:39.710453 1760410 main.go:141] libmachine: (addons-891059) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1018 14:08:39.710459 1760410 main.go:141] libmachine: (addons-891059) DBG |     </interface>
	I1018 14:08:39.710463 1760410 main.go:141] libmachine: (addons-891059) DBG |     <interface type='network'>
	I1018 14:08:39.710469 1760410 main.go:141] libmachine: (addons-891059) DBG |       <mac address='52:54:00:f4:cf:b8'/>
	I1018 14:08:39.710473 1760410 main.go:141] libmachine: (addons-891059) DBG |       <source network='default'/>
	I1018 14:08:39.710478 1760410 main.go:141] libmachine: (addons-891059) DBG |       <model type='virtio'/>
	I1018 14:08:39.710499 1760410 main.go:141] libmachine: (addons-891059) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1018 14:08:39.710511 1760410 main.go:141] libmachine: (addons-891059) DBG |     </interface>
	I1018 14:08:39.710529 1760410 main.go:141] libmachine: (addons-891059) DBG |     <serial type='pty'>
	I1018 14:08:39.710546 1760410 main.go:141] libmachine: (addons-891059) DBG |       <target type='isa-serial' port='0'>
	I1018 14:08:39.710558 1760410 main.go:141] libmachine: (addons-891059) DBG |         <model name='isa-serial'/>
	I1018 14:08:39.710568 1760410 main.go:141] libmachine: (addons-891059) DBG |       </target>
	I1018 14:08:39.710575 1760410 main.go:141] libmachine: (addons-891059) DBG |     </serial>
	I1018 14:08:39.710584 1760410 main.go:141] libmachine: (addons-891059) DBG |     <console type='pty'>
	I1018 14:08:39.710590 1760410 main.go:141] libmachine: (addons-891059) DBG |       <target type='serial' port='0'/>
	I1018 14:08:39.710597 1760410 main.go:141] libmachine: (addons-891059) DBG |     </console>
	I1018 14:08:39.710602 1760410 main.go:141] libmachine: (addons-891059) DBG |     <input type='mouse' bus='ps2'/>
	I1018 14:08:39.710611 1760410 main.go:141] libmachine: (addons-891059) DBG |     <input type='keyboard' bus='ps2'/>
	I1018 14:08:39.710619 1760410 main.go:141] libmachine: (addons-891059) DBG |     <audio id='1' type='none'/>
	I1018 14:08:39.710635 1760410 main.go:141] libmachine: (addons-891059) DBG |     <memballoon model='virtio'>
	I1018 14:08:39.710650 1760410 main.go:141] libmachine: (addons-891059) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1018 14:08:39.710670 1760410 main.go:141] libmachine: (addons-891059) DBG |     </memballoon>
	I1018 14:08:39.710681 1760410 main.go:141] libmachine: (addons-891059) DBG |     <rng model='virtio'>
	I1018 14:08:39.710688 1760410 main.go:141] libmachine: (addons-891059) DBG |       <backend model='random'>/dev/random</backend>
	I1018 14:08:39.710700 1760410 main.go:141] libmachine: (addons-891059) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1018 14:08:39.710714 1760410 main.go:141] libmachine: (addons-891059) DBG |     </rng>
	I1018 14:08:39.710725 1760410 main.go:141] libmachine: (addons-891059) DBG |   </devices>
	I1018 14:08:39.710731 1760410 main.go:141] libmachine: (addons-891059) DBG | </domain>
	I1018 14:08:39.710744 1760410 main.go:141] libmachine: (addons-891059) DBG | 
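[Editor's note] The expanded XML above is what libvirt reports back after the define step, with UUID, MAC addresses, and PCI slots filled in. A short sketch of the readback-and-boot step that the "getting domain XML... starting domain" lines describe, reusing the imports and the dom handle from the previous sketch:

// startDomain continues the earlier sketch; dom is the *libvirt.Domain
// returned by DomainDefineXML.
func startDomain(dom *libvirt.Domain) (string, error) {
	// GetXMLDesc returns the expanded XML libvirt generated at define time,
	// the same document the driver logs as "starting domain XML" above.
	xmldoc, err := dom.GetXMLDesc(0)
	if err != nil {
		return "", err
	}
	// Create boots the persistent domain (the virsh start equivalent); the
	// guest then requests DHCP leases on both attached networks.
	if err := dom.Create(); err != nil {
		return "", err
	}
	return xmldoc, nil
}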
	I1018 14:08:41.127813 1760410 main.go:141] libmachine: (addons-891059) waiting for domain to start...
	I1018 14:08:41.129181 1760410 main.go:141] libmachine: (addons-891059) domain is now running
	I1018 14:08:41.129199 1760410 main.go:141] libmachine: (addons-891059) waiting for IP...
	I1018 14:08:41.130215 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:41.130734 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:41.130765 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:41.131111 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:41.131182 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:41.131117 1760437 retry.go:31] will retry after 310.436274ms: waiting for domain to come up
	I1018 14:08:41.443955 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:41.444643 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:41.444667 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:41.444959 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:41.445013 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:41.444938 1760437 retry.go:31] will retry after 310.095624ms: waiting for domain to come up
	I1018 14:08:41.756412 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:41.756912 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:41.756985 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:41.757237 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:41.757264 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:41.757211 1760437 retry.go:31] will retry after 403.034899ms: waiting for domain to come up
	I1018 14:08:42.161632 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:42.162259 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:42.162290 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:42.162631 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:42.162653 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:42.162588 1760437 retry.go:31] will retry after 392.033324ms: waiting for domain to come up
	I1018 14:08:42.555954 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:42.556467 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:42.556490 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:42.556794 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:42.556833 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:42.556772 1760437 retry.go:31] will retry after 563.122226ms: waiting for domain to come up
	I1018 14:08:43.121698 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:43.122213 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:43.122240 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:43.122649 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:43.122673 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:43.122588 1760437 retry.go:31] will retry after 654.00858ms: waiting for domain to come up
	I1018 14:08:43.778430 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:43.778988 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:43.779017 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:43.779284 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:43.779359 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:43.779296 1760437 retry.go:31] will retry after 861.369309ms: waiting for domain to come up
	I1018 14:08:44.642386 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:44.642972 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:44.643001 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:44.643258 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:44.643325 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:44.643266 1760437 retry.go:31] will retry after 1.120629341s: waiting for domain to come up
	I1018 14:08:45.765704 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:45.766202 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:45.766225 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:45.766596 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:45.766622 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:45.766568 1760437 retry.go:31] will retry after 1.280814413s: waiting for domain to come up
	I1018 14:08:47.049323 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:47.049871 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:47.049898 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:47.050228 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:47.050287 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:47.050222 1760437 retry.go:31] will retry after 2.205238568s: waiting for domain to come up
	I1018 14:08:49.257773 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:49.258389 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:49.258419 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:49.258809 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:49.258836 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:49.258779 1760437 retry.go:31] will retry after 2.31868491s: waiting for domain to come up
	I1018 14:08:51.580165 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:51.580745 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:51.580775 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:51.581147 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:51.581179 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:51.581113 1760437 retry.go:31] will retry after 2.275257905s: waiting for domain to come up
	I1018 14:08:53.858516 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:53.859085 1760410 main.go:141] libmachine: (addons-891059) DBG | no network interface addresses found for domain addons-891059 (source=lease)
	I1018 14:08:53.859110 1760410 main.go:141] libmachine: (addons-891059) DBG | trying to list again with source=arp
	I1018 14:08:53.859415 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find current IP address of domain addons-891059 in network mk-addons-891059 (interfaces detected: [])
	I1018 14:08:53.859447 1760410 main.go:141] libmachine: (addons-891059) DBG | I1018 14:08:53.859390 1760437 retry.go:31] will retry after 3.968512343s: waiting for domain to come up
	I1018 14:08:57.829253 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:57.829924 1760410 main.go:141] libmachine: (addons-891059) found domain IP: 192.168.39.100
	I1018 14:08:57.829948 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has current primary IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:57.829954 1760410 main.go:141] libmachine: (addons-891059) reserving static IP address...
	I1018 14:08:57.830357 1760410 main.go:141] libmachine: (addons-891059) DBG | unable to find host DHCP lease matching {name: "addons-891059", mac: "52:54:00:12:2f:9d", ip: "192.168.39.100"} in network mk-addons-891059
	I1018 14:08:58.036271 1760410 main.go:141] libmachine: (addons-891059) DBG | Getting to WaitForSSH function...
	I1018 14:08:58.036306 1760410 main.go:141] libmachine: (addons-891059) reserved static IP address 192.168.39.100 for domain addons-891059
	I1018 14:08:58.036334 1760410 main.go:141] libmachine: (addons-891059) waiting for SSH...
	I1018 14:08:58.039556 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.040071 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:minikube Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:58.040113 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.040427 1760410 main.go:141] libmachine: (addons-891059) DBG | Using SSH client type: external
	I1018 14:08:58.040457 1760410 main.go:141] libmachine: (addons-891059) DBG | Using SSH private key: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa (-rw-------)
	I1018 14:08:58.040489 1760410 main.go:141] libmachine: (addons-891059) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1018 14:08:58.040505 1760410 main.go:141] libmachine: (addons-891059) DBG | About to run SSH command:
	I1018 14:08:58.040518 1760410 main.go:141] libmachine: (addons-891059) DBG | exit 0
	I1018 14:08:58.178221 1760410 main.go:141] libmachine: (addons-891059) DBG | SSH cmd err, output: <nil>: 
	I1018 14:08:58.178611 1760410 main.go:141] libmachine: (addons-891059) domain creation complete
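
The SSH wait above is an availability probe: shell out to the system ssh client with hardened options and run `exit 0` until the command succeeds. A sketch reusing the flags, key path, and address from the log, reordered into standard option-first ssh form; the polling loop and interval are illustrative:

	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	func main() {
		// Flags, key path, and target are taken from the external SSH
		// command logged above.
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-o", "IdentitiesOnly=yes",
			"-i", "/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa",
			"-p", "22",
			"docker@192.168.39.100",
			"exit 0",
		}
		// Probe until the trivial remote command succeeds.
		for {
			if err := exec.Command("/usr/bin/ssh", args...).Run(); err == nil {
				log.Println("SSH is available")
				return
			}
			time.Sleep(2 * time.Second) // poll interval is illustrative
		}
	}
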
	I1018 14:08:58.178979 1760410 main.go:141] libmachine: (addons-891059) Calling .GetConfigRaw
	I1018 14:08:58.179725 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:08:58.179914 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:08:58.180097 1760410 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1018 14:08:58.180117 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:08:58.181922 1760410 main.go:141] libmachine: Detecting operating system of created instance...
	I1018 14:08:58.181937 1760410 main.go:141] libmachine: Waiting for SSH to be available...
	I1018 14:08:58.181946 1760410 main.go:141] libmachine: Getting to WaitForSSH function...
	I1018 14:08:58.181953 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:58.184676 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.185179 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:58.185207 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.185454 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:58.185640 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:58.185815 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:58.185930 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:58.186116 1760410 main.go:141] libmachine: Using SSH client type: native
	I1018 14:08:58.186465 1760410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1018 14:08:58.186483 1760410 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1018 14:08:58.305360 1760410 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 14:08:58.305387 1760410 main.go:141] libmachine: Detecting the provisioner...
	I1018 14:08:58.305399 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:58.308732 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.309086 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:58.309110 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.309407 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:58.309679 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:58.309898 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:58.310049 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:58.310245 1760410 main.go:141] libmachine: Using SSH client type: native
	I1018 14:08:58.310526 1760410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1018 14:08:58.310542 1760410 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1018 14:08:58.429225 1760410 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1018 14:08:58.429329 1760410 main.go:141] libmachine: found compatible host: buildroot
	I1018 14:08:58.429364 1760410 main.go:141] libmachine: Provisioning with buildroot...
	I1018 14:08:58.429383 1760410 main.go:141] libmachine: (addons-891059) Calling .GetMachineName
	I1018 14:08:58.429696 1760410 buildroot.go:166] provisioning hostname "addons-891059"
	I1018 14:08:58.429732 1760410 main.go:141] libmachine: (addons-891059) Calling .GetMachineName
	I1018 14:08:58.429974 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:58.433221 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.433619 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:58.433638 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.433891 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:58.434117 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:58.434290 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:58.434435 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:58.434615 1760410 main.go:141] libmachine: Using SSH client type: native
	I1018 14:08:58.434828 1760410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1018 14:08:58.434841 1760410 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-891059 && echo "addons-891059" | sudo tee /etc/hostname
	I1018 14:08:58.571164 1760410 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-891059
	
	I1018 14:08:58.571201 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:58.574587 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.575023 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:58.575060 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.575255 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:58.575484 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:58.575706 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:58.575818 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:58.576059 1760410 main.go:141] libmachine: Using SSH client type: native
	I1018 14:08:58.576292 1760410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1018 14:08:58.576310 1760410 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-891059' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-891059/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-891059' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 14:08:58.705558 1760410 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 14:08:58.705593 1760410 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21409-1755824/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-1755824/.minikube}
	I1018 14:08:58.705650 1760410 buildroot.go:174] setting up certificates
	I1018 14:08:58.705677 1760410 provision.go:84] configureAuth start
	I1018 14:08:58.705691 1760410 main.go:141] libmachine: (addons-891059) Calling .GetMachineName
	I1018 14:08:58.706037 1760410 main.go:141] libmachine: (addons-891059) Calling .GetIP
	I1018 14:08:58.709084 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.709428 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:58.709454 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.709701 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:58.712025 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.712527 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:58.712572 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:58.712679 1760410 provision.go:143] copyHostCerts
	I1018 14:08:58.712765 1760410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.pem (1082 bytes)
	I1018 14:08:58.712925 1760410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-1755824/.minikube/cert.pem (1123 bytes)
	I1018 14:08:58.713027 1760410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-1755824/.minikube/key.pem (1675 bytes)
	I1018 14:08:58.713099 1760410 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca-key.pem org=jenkins.addons-891059 san=[127.0.0.1 192.168.39.100 addons-891059 localhost minikube]
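
configureAuth generates a server certificate whose subject alternative names cover every identity the machine answers to (san=[127.0.0.1 192.168.39.100 addons-891059 localhost minikube] above). A simplified standard-library sketch of minting such a SAN-bearing certificate; it is self-signed here for brevity, whereas the real one is signed with the CA key pair named in the log:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-891059"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs copied from the provision.go:117 line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.100")},
			DNSNames:    []string{"addons-891059", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
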
	I1018 14:08:59.195381 1760410 provision.go:177] copyRemoteCerts
	I1018 14:08:59.195454 1760410 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 14:08:59.195481 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:59.198489 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.198846 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.198881 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.199059 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:59.199299 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:59.199483 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:59.199691 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:08:59.292928 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 14:08:59.325386 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 14:08:59.357335 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 14:08:59.389117 1760410 provision.go:87] duration metric: took 683.421516ms to configureAuth
	I1018 14:08:59.389152 1760410 buildroot.go:189] setting minikube options for container-runtime
	I1018 14:08:59.389391 1760410 config.go:182] Loaded profile config "addons-891059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:08:59.389501 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:59.392319 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.392710 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.392752 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.392932 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:59.393164 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:59.393457 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:59.393687 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:59.393910 1760410 main.go:141] libmachine: Using SSH client type: native
	I1018 14:08:59.394130 1760410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1018 14:08:59.394146 1760410 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 14:08:59.663506 1760410 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 14:08:59.663540 1760410 main.go:141] libmachine: Checking connection to Docker...
	I1018 14:08:59.663551 1760410 main.go:141] libmachine: (addons-891059) Calling .GetURL
	I1018 14:08:59.665074 1760410 main.go:141] libmachine: (addons-891059) DBG | using libvirt version 8000000
	I1018 14:08:59.668182 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.668663 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.668695 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.668860 1760410 main.go:141] libmachine: Docker is up and running!
	I1018 14:08:59.668875 1760410 main.go:141] libmachine: Reticulating splines...
	I1018 14:08:59.668883 1760410 client.go:171] duration metric: took 21.185236601s to LocalClient.Create
	I1018 14:08:59.668913 1760410 start.go:167] duration metric: took 21.185315141s to libmachine.API.Create "addons-891059"
	I1018 14:08:59.668930 1760410 start.go:293] postStartSetup for "addons-891059" (driver="kvm2")
	I1018 14:08:59.668947 1760410 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 14:08:59.668967 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:08:59.669233 1760410 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 14:08:59.669269 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:59.671533 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.671957 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.671985 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.672144 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:59.672364 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:59.672523 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:59.672667 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:08:59.764031 1760410 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 14:08:59.769115 1760410 info.go:137] Remote host: Buildroot 2025.02
	I1018 14:08:59.769146 1760410 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1755824/.minikube/addons for local assets ...
	I1018 14:08:59.769224 1760410 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1755824/.minikube/files for local assets ...
	I1018 14:08:59.769248 1760410 start.go:296] duration metric: took 100.307576ms for postStartSetup
	I1018 14:08:59.769292 1760410 main.go:141] libmachine: (addons-891059) Calling .GetConfigRaw
	I1018 14:08:59.769961 1760410 main.go:141] libmachine: (addons-891059) Calling .GetIP
	I1018 14:08:59.773479 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.773901 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.773934 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.774210 1760410 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/config.json ...
	I1018 14:08:59.774465 1760410 start.go:128] duration metric: took 21.309794025s to createHost
	I1018 14:08:59.774492 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:59.777128 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.777506 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.777535 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.777745 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:59.777961 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:59.778171 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:59.778305 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:59.778500 1760410 main.go:141] libmachine: Using SSH client type: native
	I1018 14:08:59.778740 1760410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1018 14:08:59.778756 1760410 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1018 14:08:59.897254 1760410 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760796539.858103251
	
	I1018 14:08:59.897279 1760410 fix.go:216] guest clock: 1760796539.858103251
	I1018 14:08:59.897287 1760410 fix.go:229] Guest: 2025-10-18 14:08:59.858103251 +0000 UTC Remote: 2025-10-18 14:08:59.774480854 +0000 UTC m=+21.430607980 (delta=83.622397ms)
	I1018 14:08:59.897336 1760410 fix.go:200] guest clock delta is within tolerance: 83.622397ms
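
The fix.go lines above read the guest clock over SSH with `date +%s.%N`, parse the seconds.nanoseconds value, and compare it against the host clock. A sketch of that parse-and-compare step using the two timestamps captured in the log; the one-second tolerance threshold is illustrative, since the log only reports "within tolerance":

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func main() {
		guest := "1760796539.858103251" // `date +%s.%N` output from the log
		parts := strings.SplitN(strings.TrimSpace(guest), ".", 2)
		sec, err1 := strconv.ParseInt(parts[0], 10, 64)
		nsec, err2 := strconv.ParseInt(parts[1], 10, 64)
		if err1 != nil || err2 != nil {
			panic("malformed timestamp")
		}
		guestTime := time.Unix(sec, nsec).UTC()

		// Host-side timestamp taken at the same moment ("Remote:" in the log).
		host := time.Date(2025, 10, 18, 14, 8, 59, 774480854, time.UTC)

		delta := guestTime.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		// Prints delta=83.622397ms, matching the logged value.
		fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < time.Second)
	}
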
	I1018 14:08:59.897364 1760410 start.go:83] releasing machines lock for "addons-891059", held for 21.432776387s
	I1018 14:08:59.897398 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:08:59.897684 1760410 main.go:141] libmachine: (addons-891059) Calling .GetIP
	I1018 14:08:59.901076 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.901487 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.901521 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.901705 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:08:59.902565 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:08:59.902783 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:08:59.902886 1760410 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 14:08:59.902954 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:59.903079 1760410 ssh_runner.go:195] Run: cat /version.json
	I1018 14:08:59.903102 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:08:59.906580 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.906633 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.907079 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.907125 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:08:59.907149 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.907167 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:08:59.907386 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:59.907427 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:08:59.907642 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:59.907647 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:08:59.907824 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:59.907846 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:08:59.908031 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:08:59.908099 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:08:59.992932 1760410 ssh_runner.go:195] Run: systemctl --version
	I1018 14:09:00.021820 1760410 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 14:09:00.183446 1760410 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 14:09:00.190803 1760410 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 14:09:00.190911 1760410 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 14:09:00.213058 1760410 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
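
Note that the conflicting bridge CNI config is disabled by renaming it with a .mk_disabled suffix, not by deleting it. A rough Go equivalent of that rename pass (in reality it runs on the guest via the find/-exec command above, and the matching rules here are simplified):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		dir := "/etc/cni/net.d"
		entries, err := os.ReadDir(dir)
		if err != nil {
			panic(err)
		}
		for _, e := range entries {
			name := e.Name()
			// Skip directories and files that are already disabled.
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				old := filepath.Join(dir, name)
				if err := os.Rename(old, old+".mk_disabled"); err != nil {
					panic(err)
				}
				fmt.Printf("disabled %s\n", old)
			}
		}
	}
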
	I1018 14:09:00.213091 1760410 start.go:495] detecting cgroup driver to use...
	I1018 14:09:00.213178 1760410 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 14:09:00.233624 1760410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 14:09:00.252522 1760410 docker.go:218] disabling cri-docker service (if available) ...
	I1018 14:09:00.252617 1760410 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 14:09:00.272205 1760410 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 14:09:00.289717 1760410 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 14:09:00.439992 1760410 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 14:09:00.649208 1760410 docker.go:234] disabling docker service ...
	I1018 14:09:00.649292 1760410 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 14:09:00.666373 1760410 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 14:09:00.682992 1760410 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 14:09:00.835422 1760410 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 14:09:00.982700 1760410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 14:09:00.999428 1760410 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 14:09:01.024799 1760410 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 14:09:01.024906 1760410 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:09:01.038654 1760410 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 14:09:01.038752 1760410 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:09:01.052374 1760410 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:09:01.066305 1760410 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:09:01.080191 1760410 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 14:09:01.094600 1760410 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:09:01.108084 1760410 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:09:01.131069 1760410 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:09:01.144608 1760410 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 14:09:01.156726 1760410 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1018 14:09:01.156791 1760410 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1018 14:09:01.180230 1760410 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 14:09:01.193680 1760410 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 14:09:01.335791 1760410 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 14:09:01.461561 1760410 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 14:09:01.461683 1760410 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 14:09:01.467775 1760410 start.go:563] Will wait 60s for crictl version
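
Both "Will wait 60s" steps are plain polls: stat the target until it exists or the deadline passes. A minimal sketch of the socket wait (the poll interval is illustrative):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/crio/crio.sock"
		deadline := time.Now().Add(60 * time.Second)
		for {
			if _, err := os.Stat(sock); err == nil {
				fmt.Println("socket ready:", sock)
				return
			}
			if time.Now().After(deadline) {
				fmt.Println("timed out waiting for", sock)
				os.Exit(1)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
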
	I1018 14:09:01.467870 1760410 ssh_runner.go:195] Run: which crictl
	I1018 14:09:01.472812 1760410 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1018 14:09:01.516410 1760410 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1018 14:09:01.516518 1760410 ssh_runner.go:195] Run: crio --version
	I1018 14:09:01.548303 1760410 ssh_runner.go:195] Run: crio --version
	I1018 14:09:01.582529 1760410 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1018 14:09:01.583814 1760410 main.go:141] libmachine: (addons-891059) Calling .GetIP
	I1018 14:09:01.588147 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:01.588628 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:01.588667 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:01.588973 1760410 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1018 14:09:01.594159 1760410 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 14:09:01.610280 1760410 kubeadm.go:883] updating cluster {Name:addons-891059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-891059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 14:09:01.610462 1760410 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 14:09:01.610527 1760410 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 14:09:01.648777 1760410 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1018 14:09:01.648866 1760410 ssh_runner.go:195] Run: which lz4
	I1018 14:09:01.653595 1760410 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1018 14:09:01.658875 1760410 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1018 14:09:01.658909 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1018 14:09:03.215465 1760410 crio.go:462] duration metric: took 1.561899205s to copy over tarball
	I1018 14:09:03.215548 1760410 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1018 14:09:04.890701 1760410 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.675118935s)
	I1018 14:09:04.890741 1760410 crio.go:469] duration metric: took 1.675237586s to extract the tarball
	I1018 14:09:04.890755 1760410 ssh_runner.go:146] rm: /preloaded.tar.lz4
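
The preload path above is: stat the tarball on the guest, copy it over only if the existence check fails, extract it with lz4-compressed tar while preserving security xattrs, then delete it. A condensed sketch of that check-extract-cleanup sequence with the commands as logged; it would run on the guest in reality, and the scp transfer is stubbed out:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func run(name string, args ...string) error {
		cmd := exec.Command(name, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		const tarball = "/preloaded.tar.lz4"
		// Only transfer when the existence check fails, as in the log;
		// a real run would scp the tarball over SSH at this point.
		if _, err := os.Stat(tarball); err != nil {
			fmt.Println("tarball missing; transfer it first")
			return
		}
		// Extract preserving security xattrs, exactly as the logged tar call does.
		if err := run("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball); err != nil {
			panic(err)
		}
		// Remove the tarball once extracted.
		if err := run("sudo", "rm", "-f", tarball); err != nil {
			panic(err)
		}
	}
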
	I1018 14:09:04.933819 1760410 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 14:09:04.980242 1760410 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 14:09:04.980269 1760410 cache_images.go:85] Images are preloaded, skipping loading
	I1018 14:09:04.980277 1760410 kubeadm.go:934] updating node { 192.168.39.100 8443 v1.34.1 crio true true} ...
	I1018 14:09:04.980412 1760410 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-891059 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-891059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 14:09:04.980487 1760410 ssh_runner.go:195] Run: crio config
	I1018 14:09:05.031493 1760410 cni.go:84] Creating CNI manager for ""
	I1018 14:09:05.031532 1760410 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 14:09:05.031561 1760410 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 14:09:05.031594 1760410 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.100 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-891059 NodeName:addons-891059 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 14:09:05.031791 1760410 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-891059"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.100"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.100"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 14:09:05.031889 1760410 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 14:09:05.045249 1760410 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 14:09:05.045322 1760410 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 14:09:05.057594 1760410 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1018 14:09:05.079304 1760410 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 14:09:05.101229 1760410 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1018 14:09:05.123379 1760410 ssh_runner.go:195] Run: grep 192.168.39.100	control-plane.minikube.internal$ /etc/hosts
	I1018 14:09:05.128149 1760410 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 14:09:05.144740 1760410 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 14:09:05.287867 1760410 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 14:09:05.310139 1760410 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059 for IP: 192.168.39.100
	I1018 14:09:05.310175 1760410 certs.go:195] generating shared ca certs ...
	I1018 14:09:05.310203 1760410 certs.go:227] acquiring lock for ca certs: {Name:mk20fae4d22bb4937e66ac0eaa52c1608fa22770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:05.310412 1760410 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.key
	I1018 14:09:05.928678 1760410 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt ...
	I1018 14:09:05.928717 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt: {Name:mk48305fdb94e31a92b48facef68eec843776b87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:05.928918 1760410 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.key ...
	I1018 14:09:05.928931 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.key: {Name:mk701e118ad43b61f158a839f73ec6b965102354 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:05.929018 1760410 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.key
	I1018 14:09:06.043454 1760410 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.crt ...
	I1018 14:09:06.043488 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.crt: {Name:mk77ddeb4af674721966c75040f4f1fb5d69023d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:06.043679 1760410 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.key ...
	I1018 14:09:06.043694 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.key: {Name:mk65d64f37c13d41fae5e3b77d20098229c0b1de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:06.043772 1760410 certs.go:257] generating profile certs ...
	I1018 14:09:06.043835 1760410 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.key
	I1018 14:09:06.043862 1760410 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt with IP's: []
	I1018 14:09:06.259815 1760410 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt ...
	I1018 14:09:06.259852 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt: {Name:mk812f759d940b265a8e60c894cb050949fd9e68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:06.260037 1760410 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.key ...
	I1018 14:09:06.260054 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.key: {Name:mk50fce6a65f5d969bea0e1a48d418e711ccdfe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:06.260134 1760410 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.key.c2889daa
	I1018 14:09:06.260154 1760410 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.crt.c2889daa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.100]
	I1018 14:09:06.486406 1760410 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.crt.c2889daa ...
	I1018 14:09:06.486442 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.crt.c2889daa: {Name:mk13f44e79eaa89077b52da6090b647e00b64732 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:06.486629 1760410 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.key.c2889daa ...
	I1018 14:09:06.486643 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.key.c2889daa: {Name:mkbe94bfad32eaf986c1751799d5eb527ff32552 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:06.486733 1760410 certs.go:382] copying /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.crt.c2889daa -> /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.crt
	I1018 14:09:06.486836 1760410 certs.go:386] copying /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.key.c2889daa -> /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.key
	I1018 14:09:06.486900 1760410 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/proxy-client.key
	I1018 14:09:06.486924 1760410 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/proxy-client.crt with IP's: []
	I1018 14:09:06.798152 1760410 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/proxy-client.crt ...
	I1018 14:09:06.798201 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/proxy-client.crt: {Name:mk29883864de081c2ef5f64c49afd825bbef9059 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:06.798410 1760410 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/proxy-client.key ...
	I1018 14:09:06.798426 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/proxy-client.key: {Name:mk619e894bc6a3076fe0e333221023492d7ff3e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:06.798649 1760410 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 14:09:06.798690 1760410 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem (1082 bytes)
	I1018 14:09:06.798715 1760410 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/cert.pem (1123 bytes)
	I1018 14:09:06.798735 1760410 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/key.pem (1675 bytes)
	I1018 14:09:06.799486 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 14:09:06.845692 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 14:09:06.882745 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 14:09:06.918371 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 14:09:06.952411 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 14:09:06.985595 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 14:09:07.018257 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 14:09:07.051475 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 14:09:07.086174 1760410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 14:09:07.118849 1760410 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 14:09:07.141590 1760410 ssh_runner.go:195] Run: openssl version
	I1018 14:09:07.148896 1760410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 14:09:07.163684 1760410 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 14:09:07.169573 1760410 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 14:09 /usr/share/ca-certificates/minikubeCA.pem
	I1018 14:09:07.169638 1760410 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 14:09:07.177781 1760410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 14:09:07.192577 1760410 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 14:09:07.199705 1760410 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 14:09:07.199768 1760410 kubeadm.go:400] StartCluster: {Name:addons-891059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-891059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:09:07.199879 1760410 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 14:09:07.199953 1760410 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 14:09:07.241737 1760410 cri.go:89] found id: ""
	I1018 14:09:07.241827 1760410 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 14:09:07.254574 1760410 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 14:09:07.267441 1760410 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 14:09:07.280136 1760410 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 14:09:07.280159 1760410 kubeadm.go:157] found existing configuration files:
	
	I1018 14:09:07.280207 1760410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 14:09:07.292712 1760410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 14:09:07.292791 1760410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 14:09:07.305268 1760410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 14:09:07.317524 1760410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 14:09:07.317645 1760410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 14:09:07.330484 1760410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 14:09:07.342579 1760410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 14:09:07.342663 1760410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 14:09:07.355673 1760410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 14:09:07.367952 1760410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 14:09:07.368036 1760410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 14:09:07.381331 1760410 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1018 14:09:07.547925 1760410 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 14:09:20.098002 1760410 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 14:09:20.098063 1760410 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 14:09:20.098145 1760410 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 14:09:20.098299 1760410 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 14:09:20.098447 1760410 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 14:09:20.098529 1760410 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 14:09:20.100393 1760410 out.go:252]   - Generating certificates and keys ...
	I1018 14:09:20.100495 1760410 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 14:09:20.100629 1760410 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 14:09:20.100764 1760410 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 14:09:20.100857 1760410 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 14:09:20.100964 1760410 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 14:09:20.101051 1760410 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 14:09:20.101129 1760410 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 14:09:20.101315 1760410 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-891059 localhost] and IPs [192.168.39.100 127.0.0.1 ::1]
	I1018 14:09:20.101405 1760410 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 14:09:20.101571 1760410 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-891059 localhost] and IPs [192.168.39.100 127.0.0.1 ::1]
	I1018 14:09:20.101672 1760410 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 14:09:20.101744 1760410 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 14:09:20.101795 1760410 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 14:09:20.101843 1760410 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 14:09:20.101896 1760410 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 14:09:20.101961 1760410 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 14:09:20.102011 1760410 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 14:09:20.102082 1760410 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 14:09:20.102127 1760410 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 14:09:20.102199 1760410 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 14:09:20.102260 1760410 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 14:09:20.103813 1760410 out.go:252]   - Booting up control plane ...
	I1018 14:09:20.103893 1760410 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 14:09:20.103954 1760410 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 14:09:20.104007 1760410 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 14:09:20.104089 1760410 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 14:09:20.104181 1760410 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 14:09:20.104334 1760410 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 14:09:20.104446 1760410 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 14:09:20.104482 1760410 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 14:09:20.104625 1760410 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 14:09:20.104745 1760410 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 14:09:20.104820 1760410 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.50245312s
	I1018 14:09:20.104902 1760410 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 14:09:20.104976 1760410 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.100:8443/livez
	I1018 14:09:20.105057 1760410 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 14:09:20.105126 1760410 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 14:09:20.105186 1760410 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.213660902s
	I1018 14:09:20.105249 1760410 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.327835251s
	I1018 14:09:20.105309 1760410 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.50283692s
	I1018 14:09:20.105410 1760410 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 14:09:20.105516 1760410 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 14:09:20.105572 1760410 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 14:09:20.105752 1760410 kubeadm.go:318] [mark-control-plane] Marking the node addons-891059 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 14:09:20.105817 1760410 kubeadm.go:318] [bootstrap-token] Using token: ci4c4o.8llcllq96muz9osf
	I1018 14:09:20.108036 1760410 out.go:252]   - Configuring RBAC rules ...
	I1018 14:09:20.108126 1760410 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 14:09:20.108210 1760410 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 14:09:20.108332 1760410 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 14:09:20.108465 1760410 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 14:09:20.108571 1760410 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 14:09:20.108668 1760410 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 14:09:20.108821 1760410 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 14:09:20.108863 1760410 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 14:09:20.108900 1760410 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 14:09:20.108911 1760410 kubeadm.go:318] 
	I1018 14:09:20.108961 1760410 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 14:09:20.108967 1760410 kubeadm.go:318] 
	I1018 14:09:20.109026 1760410 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 14:09:20.109031 1760410 kubeadm.go:318] 
	I1018 14:09:20.109051 1760410 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 14:09:20.109098 1760410 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 14:09:20.109140 1760410 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 14:09:20.109146 1760410 kubeadm.go:318] 
	I1018 14:09:20.109214 1760410 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 14:09:20.109221 1760410 kubeadm.go:318] 
	I1018 14:09:20.109258 1760410 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 14:09:20.109264 1760410 kubeadm.go:318] 
	I1018 14:09:20.109311 1760410 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 14:09:20.109381 1760410 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 14:09:20.109469 1760410 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 14:09:20.109488 1760410 kubeadm.go:318] 
	I1018 14:09:20.109554 1760410 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 14:09:20.109622 1760410 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 14:09:20.109628 1760410 kubeadm.go:318] 
	I1018 14:09:20.109698 1760410 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ci4c4o.8llcllq96muz9osf \
	I1018 14:09:20.109796 1760410 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b3c5d368998c8b590f32f5883c53beccabaf63a2ceb2a6106ae6129f9dfd2290 \
	I1018 14:09:20.109908 1760410 kubeadm.go:318] 	--control-plane 
	I1018 14:09:20.109934 1760410 kubeadm.go:318] 
	I1018 14:09:20.110067 1760410 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 14:09:20.110077 1760410 kubeadm.go:318] 
	I1018 14:09:20.110176 1760410 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ci4c4o.8llcllq96muz9osf \
	I1018 14:09:20.110279 1760410 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b3c5d368998c8b590f32f5883c53beccabaf63a2ceb2a6106ae6129f9dfd2290 
	I1018 14:09:20.110293 1760410 cni.go:84] Creating CNI manager for ""
	I1018 14:09:20.110301 1760410 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 14:09:20.111886 1760410 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1018 14:09:20.113016 1760410 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1018 14:09:20.127933 1760410 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1018 14:09:20.158289 1760410 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 14:09:20.158398 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:20.158416 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-891059 minikube.k8s.io/updated_at=2025_10_18T14_09_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404 minikube.k8s.io/name=addons-891059 minikube.k8s.io/primary=true
	I1018 14:09:20.315678 1760410 ops.go:34] apiserver oom_adj: -16
	I1018 14:09:20.315834 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:20.816073 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:21.316085 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:21.816909 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:22.316182 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:22.816708 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:23.316221 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:23.816476 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:24.316683 1760410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:09:24.414532 1760410 kubeadm.go:1113] duration metric: took 4.256222081s to wait for elevateKubeSystemPrivileges
	I1018 14:09:24.414583 1760410 kubeadm.go:402] duration metric: took 17.214819054s to StartCluster
	I1018 14:09:24.414614 1760410 settings.go:142] acquiring lock: {Name:mkc4a015ef1628793f35d59d734503738678fa0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:24.414803 1760410 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-1755824/kubeconfig
	I1018 14:09:24.415376 1760410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/kubeconfig: {Name:mkd0359d239071160661347e1005ef052a3265ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:09:24.415641 1760410 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 14:09:24.415700 1760410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 14:09:24.415754 1760410 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1018 14:09:24.415887 1760410 addons.go:69] Setting yakd=true in profile "addons-891059"
	I1018 14:09:24.415896 1760410 config.go:182] Loaded profile config "addons-891059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:09:24.415930 1760410 addons.go:238] Setting addon yakd=true in "addons-891059"
	I1018 14:09:24.415941 1760410 addons.go:69] Setting registry-creds=true in profile "addons-891059"
	I1018 14:09:24.415953 1760410 addons.go:238] Setting addon registry-creds=true in "addons-891059"
	I1018 14:09:24.415971 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.415979 1760410 addons.go:69] Setting volcano=true in profile "addons-891059"
	I1018 14:09:24.415983 1760410 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-891059"
	I1018 14:09:24.415991 1760410 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-891059"
	I1018 14:09:24.415998 1760410 addons.go:69] Setting volumesnapshots=true in profile "addons-891059"
	I1018 14:09:24.416010 1760410 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-891059"
	I1018 14:09:24.415959 1760410 addons.go:69] Setting inspektor-gadget=true in profile "addons-891059"
	I1018 14:09:24.416026 1760410 addons.go:69] Setting storage-provisioner=true in profile "addons-891059"
	I1018 14:09:24.416035 1760410 addons.go:238] Setting addon storage-provisioner=true in "addons-891059"
	I1018 14:09:24.415990 1760410 addons.go:238] Setting addon volcano=true in "addons-891059"
	I1018 14:09:24.416051 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.416063 1760410 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-891059"
	I1018 14:09:24.416073 1760410 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-891059"
	I1018 14:09:24.416105 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.416110 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.416136 1760410 addons.go:69] Setting metrics-server=true in profile "addons-891059"
	I1018 14:09:24.416172 1760410 addons.go:238] Setting addon metrics-server=true in "addons-891059"
	I1018 14:09:24.416211 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.416266 1760410 addons.go:69] Setting registry=true in profile "addons-891059"
	I1018 14:09:24.416290 1760410 addons.go:238] Setting addon registry=true in "addons-891059"
	I1018 14:09:24.416318 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.416454 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.416462 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.415971 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.416496 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.416504 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.416536 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.416546 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.416565 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.416634 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.416670 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.416702 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.416010 1760410 addons.go:238] Setting addon volumesnapshots=true in "addons-891059"
	I1018 14:09:24.416740 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.416750 1760410 addons.go:69] Setting cloud-spanner=true in profile "addons-891059"
	I1018 14:09:24.416761 1760410 addons.go:238] Setting addon cloud-spanner=true in "addons-891059"
	I1018 14:09:24.416772 1760410 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-891059"
	I1018 14:09:24.416738 1760410 addons.go:69] Setting gcp-auth=true in profile "addons-891059"
	I1018 14:09:24.416797 1760410 mustload.go:65] Loading cluster: addons-891059
	I1018 14:09:24.416803 1760410 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-891059"
	I1018 14:09:24.416808 1760410 addons.go:69] Setting ingress-dns=true in profile "addons-891059"
	I1018 14:09:24.416054 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.416816 1760410 addons.go:69] Setting default-storageclass=true in profile "addons-891059"
	I1018 14:09:24.416827 1760410 addons.go:69] Setting ingress=true in profile "addons-891059"
	I1018 14:09:24.416838 1760410 addons.go:238] Setting addon ingress=true in "addons-891059"
	I1018 14:09:24.416838 1760410 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-891059"
	I1018 14:09:24.416009 1760410 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-891059"
	I1018 14:09:24.416036 1760410 addons.go:238] Setting addon inspektor-gadget=true in "addons-891059"
	I1018 14:09:24.416819 1760410 addons.go:238] Setting addon ingress-dns=true in "addons-891059"
	I1018 14:09:24.417180 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.417202 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.417220 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.417277 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.417301 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.417457 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.417670 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.417700 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.417772 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.417855 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.417889 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.417365 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.418030 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.418152 1760410 config.go:182] Loaded profile config "addons-891059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:09:24.418393 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.418424 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.418444 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.418521 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.418552 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.418624 1760410 out.go:179] * Verifying Kubernetes components...
	I1018 14:09:24.418907 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.418967 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.422521 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.422570 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.422950 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.423390 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.423424 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.425453 1760410 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 14:09:24.428788 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.428847 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.432739 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.432818 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.446515 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41227
	I1018 14:09:24.447603 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.448044 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41701
	I1018 14:09:24.448620 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.449130 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.449150 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.450319 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.450375 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.450390 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.452314 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.452974 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.453024 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.455440 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43969
	I1018 14:09:24.456592 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.456640 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.459616 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46693
	I1018 14:09:24.459757 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38083
	I1018 14:09:24.459794 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42705
	I1018 14:09:24.460277 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.460735 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46237
	I1018 14:09:24.460955 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.463457 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.463624 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.463650 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.463943 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.463970 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.464096 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.464766 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.464811 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.466143 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.466259 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.466646 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.467503 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.467526 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.468700 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.468724 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.469056 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.469102 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.469455 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.469522 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.470074 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.470106 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.470616 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.470636 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.471024 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41521
	I1018 14:09:24.471853 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.472590 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.472616 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.473010 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.473088 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.473315 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.473750 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34635
	I1018 14:09:24.474289 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.474360 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.474951 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.477612 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34041
	I1018 14:09:24.478762 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.479308 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.479333 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.479844 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.480258 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.480895 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.482303 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46095
	I1018 14:09:24.483440 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.483700 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.483715 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.483863 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.483872 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.484222 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.484556 1760410 addons.go:238] Setting addon default-storageclass=true in "addons-891059"
	I1018 14:09:24.484598 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.484735 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.484774 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.484961 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.485003 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.485644 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.486185 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.486221 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.488758 1760410 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-891059"
	I1018 14:09:24.488809 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.489181 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.489230 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.489519 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:24.489701 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46233
	I1018 14:09:24.494198 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43169
	I1018 14:09:24.495236 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.496047 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.496066 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.496101 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41357
	I1018 14:09:24.496638 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34835
	I1018 14:09:24.496952 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.497036 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45323
	I1018 14:09:24.497223 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.497670 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.497914 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.498318 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.498682 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38593
	I1018 14:09:24.498718 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.498744 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.499070 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.499580 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.499603 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.499631 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.499677 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.499736 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.500137 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.500171 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.500183 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.500231 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.500253 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.500704 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.500747 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.501004 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.501037 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.501047 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.501305 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.501852 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.501890 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.505372 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.505855 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.508424 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.508460 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.508580 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.509093 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.509143 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.510293 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33423
	I1018 14:09:24.510851 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.511364 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.512160 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.512181 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.512251 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.513848 1760410 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1018 14:09:24.513854 1760410 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1018 14:09:24.515867 1760410 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1018 14:09:24.515885 1760410 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1018 14:09:24.515912 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.516312 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.517033 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.517295 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.517359 1760410 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 14:09:24.519170 1760410 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 14:09:24.519288 1760410 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1018 14:09:24.520436 1760410 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 14:09:24.520516 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1018 14:09:24.520549 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.521274 1760410 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 14:09:24.521295 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1018 14:09:24.521320 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.521822 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38545
	I1018 14:09:24.522725 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.523307 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.523325 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.523932 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.524192 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.527503 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.527590 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.527618 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.527649 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.527682 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35355
	I1018 14:09:24.528451 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.528456 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.528513 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.528706 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.528847 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.529262 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.529279 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.529677 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.529956 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.530621 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39361
	I1018 14:09:24.531189 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.531587 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33089
	I1018 14:09:24.532552 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.532587 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.533165 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.533199 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.534272 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.534329 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.534670 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43865
	I1018 14:09:24.534888 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.534927 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.534934 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.535018 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36367
	I1018 14:09:24.535456 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.536405 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.536423 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.536459 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.536498 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.536522 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.536586 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.536638 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.536641 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.536797 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.536878 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.537335 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.537386 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.537814 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34033
	I1018 14:09:24.537939 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.538069 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.538085 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.538431 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.538510 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.538875 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.539073 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.539143 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33021
	I1018 14:09:24.540287 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1018 14:09:24.540559 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.540650 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.540661 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.540287 1760410 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1018 14:09:24.540789 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44949
	I1018 14:09:24.541394 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.541512 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.541542 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.541580 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.542392 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.542582 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.542593 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.541968 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.541995 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.542027 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.541787 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.542477 1760410 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 14:09:24.542769 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1018 14:09:24.542787 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.543139 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.543258 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1018 14:09:24.543232 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.543329 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.544059 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.544119 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.544691 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.544728 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.545623 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.545670 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.547151 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1018 14:09:24.547560 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.548774 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.548901 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1018 14:09:24.549486 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.549513 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:24.549520 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38547
	I1018 14:09:24.549555 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:24.549743 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.549944 1760410 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1018 14:09:24.549986 1760410 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1018 14:09:24.550111 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.550462 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.550548 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.551322 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1018 14:09:24.551448 1760410 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1018 14:09:24.551471 1760410 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1018 14:09:24.551503 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.552417 1760410 out.go:179]   - Using image docker.io/registry:3.0.0
	I1018 14:09:24.552611 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.552668 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.552694 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.553138 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.553466 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1018 14:09:24.553546 1760410 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1018 14:09:24.553557 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1018 14:09:24.553575 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.555796 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1018 14:09:24.556091 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.556537 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.559463 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1018 14:09:24.560143 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39261
	I1018 14:09:24.560689 1760410 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1018 14:09:24.560709 1760410 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1018 14:09:24.560733 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.561360 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.562223 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.562248 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.562334 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37369
	I1018 14:09:24.564735 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.564798 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.564809 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.564889 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.564947 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43255
	I1018 14:09:24.565207 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.565656 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.565686 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.565804 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.565867 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.566012 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.566138 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.566251 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.566837 1760410 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1018 14:09:24.566841 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.566954 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.567074 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.567098 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.567382 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.567544 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.567609 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.567849 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.568018 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.568167 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.568390 1760410 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 14:09:24.568518 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1018 14:09:24.568539 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.568408 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.569303 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.569321 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.569601 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41813
	I1018 14:09:24.569798 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.569904 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44161
	I1018 14:09:24.570247 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.570534 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.570627 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.570989 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.571754 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.571776 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.571809 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.571835 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.571888 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.571942 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.572034 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34561
	I1018 14:09:24.572101 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:24.572114 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:24.572301 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.572420 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.572512 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:24.572532 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:24.572545 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:24.572552 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:24.572560 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:24.573079 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:24.573081 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.573095 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.573102 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:24.573108 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.573114 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	W1018 14:09:24.573205 1760410 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1018 14:09:24.573206 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.573377 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.573909 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.574598 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.574613 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.574986 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.575284 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.575403 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.576055 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41389
	I1018 14:09:24.576282 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.576635 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.576750 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.577145 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.577164 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.577387 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.577425 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.578449 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.578485 1760410 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1018 14:09:24.578527 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.578725 1760410 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 14:09:24.578741 1760410 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 14:09:24.578760 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.578783 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.579845 1760410 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1018 14:09:24.579890 1760410 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1018 14:09:24.579901 1760410 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1018 14:09:24.579916 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.579866 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.579966 1760410 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1018 14:09:24.581298 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.581518 1760410 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1018 14:09:24.581555 1760410 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 14:09:24.581566 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1018 14:09:24.581582 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.581701 1760410 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1018 14:09:24.581733 1760410 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1018 14:09:24.581762 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.582432 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.582611 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.582663 1760410 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1018 14:09:24.582679 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1018 14:09:24.582698 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.582744 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.583429 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.583635 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.583761 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.583832 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.584362 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35513
	I1018 14:09:24.584568 1760410 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 14:09:24.585155 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:24.585916 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:24.585938 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:24.586019 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.586361 1760410 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 14:09:24.586383 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 14:09:24.586403 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.586683 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:24.586913 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:24.587506 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.587537 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.587565 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.587802 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.587988 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.588388 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.588708 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.588631 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.588734 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.589129 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.589325 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.589522 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.590171 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.590296 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:24.590321 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.590811 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.591126 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.591174 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.591319 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.591484 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.591523 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.591739 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.591761 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.591773 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.591922 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.592011 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.592200 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.592253 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.592273 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.592387 1760410 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1018 14:09:24.592403 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.592465 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.592624 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.592714 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.592859 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.592993 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.593164 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.593741 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.593774 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.593963 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.594146 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.594295 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.594464 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:24.595795 1760410 out.go:179]   - Using image docker.io/busybox:stable
	I1018 14:09:24.597040 1760410 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 14:09:24.597063 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1018 14:09:24.597082 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:24.600612 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.600998 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:24.601019 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:24.601363 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:24.601584 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:24.601753 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:24.601908 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	W1018 14:09:24.742102 1760410 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:60786->192.168.39.100:22: read: connection reset by peer
	I1018 14:09:24.742153 1760410 retry.go:31] will retry after 155.166839ms: ssh: handshake failed: read tcp 192.168.39.1:60786->192.168.39.100:22: read: connection reset by peer
	W1018 14:09:24.905499 1760410 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:60832->192.168.39.100:22: read: connection reset by peer
	I1018 14:09:24.905539 1760410 retry.go:31] will retry after 290.251665ms: ssh: handshake failed: read tcp 192.168.39.1:60832->192.168.39.100:22: read: connection reset by peer
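The two handshake failures above are transient: the host-side dialer hits a connection reset while the guest's sshd is still settling, and retry.go backs off for a fraction of a second before redialing. A minimal sketch of that retry-with-short-randomized-delay pattern follows; the function names are illustrative, not minikube's actual implementation.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryDial retries fn with a short randomized delay, mirroring the
// "will retry after 155.166839ms" behaviour logged by retry.go above.
func retryDial(attempts int, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := time.Duration(100+rand.Intn(250)) * time.Millisecond
		fmt.Printf("dial failure (will retry after %v): %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retryDial(3, func() error {
		calls++
		if calls < 3 { // simulate two resets, then success
			return errors.New("ssh: handshake failed: connection reset by peer")
		}
		return nil
	})
	fmt.Println("result:", err)
}
```

Both retries in this run succeeded, since the subsequent scp and kubectl steps proceed normally.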
	I1018 14:09:25.195583 1760410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 14:09:25.195661 1760410 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 14:09:25.238678 1760410 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1018 14:09:25.238705 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1018 14:09:25.239580 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 14:09:25.243439 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 14:09:25.244497 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 14:09:25.264037 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 14:09:25.312273 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 14:09:25.315550 1760410 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1018 14:09:25.315578 1760410 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1018 14:09:25.320939 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1018 14:09:25.324940 1760410 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1018 14:09:25.324962 1760410 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1018 14:09:25.327771 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 14:09:25.328434 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 14:09:25.339706 1760410 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1018 14:09:25.339737 1760410 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1018 14:09:25.369886 1760410 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1018 14:09:25.369914 1760410 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1018 14:09:25.370459 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 14:09:25.537261 1760410 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1018 14:09:25.537300 1760410 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1018 14:09:25.585100 1760410 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1018 14:09:25.585145 1760410 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1018 14:09:25.685376 1760410 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1018 14:09:25.685407 1760410 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1018 14:09:25.768517 1760410 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1018 14:09:25.768553 1760410 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1018 14:09:25.768978 1760410 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1018 14:09:25.769004 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1018 14:09:25.814134 1760410 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:25.814164 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1018 14:09:25.853698 1760410 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 14:09:25.853731 1760410 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1018 14:09:26.014188 1760410 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1018 14:09:26.014222 1760410 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1018 14:09:26.060465 1760410 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1018 14:09:26.060498 1760410 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1018 14:09:26.091905 1760410 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1018 14:09:26.091940 1760410 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1018 14:09:26.114081 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1018 14:09:26.248999 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:26.271395 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 14:09:26.432032 1760410 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1018 14:09:26.432068 1760410 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1018 14:09:26.436207 1760410 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1018 14:09:26.436242 1760410 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1018 14:09:26.558205 1760410 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1018 14:09:26.558233 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1018 14:09:26.717226 1760410 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 14:09:26.717268 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1018 14:09:26.717225 1760410 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1018 14:09:26.717386 1760410 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1018 14:09:26.825284 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1018 14:09:27.137937 1760410 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1018 14:09:27.137970 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1018 14:09:27.440610 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 14:09:27.873332 1760410 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1018 14:09:27.873382 1760410 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1018 14:09:28.056527 1760410 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.860893783s)
	I1018 14:09:28.056563 1760410 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
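The bash pipeline that just completed (2.86s, started at 14:09:25.195583) edits the coredns ConfigMap in place: sed splices a hosts{} stanza resolving host.minikube.internal to the host IP in front of the forward directive, inserts a log directive in front of errors, and kubectl replace pushes the result back. A rough stdlib Go equivalent of that text transformation, under the assumption that a helper named injectHostRecord is a fair stand-in (the real code shells out to sed as shown above):

```go
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} stanza before the "forward ." directive
// and a "log" directive before "errors", matching the sed edits in the log.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		hostIP)
	var out strings.Builder
	for _, line := range strings.Split(corefile, "\n") {
		trimmed := strings.TrimSpace(line)
		if strings.HasPrefix(trimmed, "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		if trimmed == "errors" {
			out.WriteString("        log\n")
		}
		out.WriteString(line)
		out.WriteString("\n")
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}"
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}
```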
	I1018 14:09:28.056618 1760410 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.860884504s)
	I1018 14:09:28.056693 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.817081387s)
	I1018 14:09:28.056751 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:28.056765 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:28.056766 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.813291284s)
	I1018 14:09:28.056811 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:28.056828 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:28.057259 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:28.057276 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:28.057280 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:28.057300 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:28.057326 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:28.057416 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:28.057439 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:28.057482 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:28.057493 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:28.057712 1760410 node_ready.go:35] waiting up to 6m0s for node "addons-891059" to be "Ready" ...
	I1018 14:09:28.057737 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:28.057777 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:28.057784 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:28.057851 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:28.057951 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:28.057965 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:28.062488 1760410 node_ready.go:49] node "addons-891059" is "Ready"
	I1018 14:09:28.062522 1760410 node_ready.go:38] duration metric: took 4.780102ms for node "addons-891059" to be "Ready" ...
	I1018 14:09:28.062537 1760410 api_server.go:52] waiting for apiserver process to appear ...
	I1018 14:09:28.062602 1760410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
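node_ready.go polls the node object until its Ready condition reports true, bounded by the 6m0s ceiling; in this run the node was already Ready on the first check (4.78ms). A bare-bones sketch of that bounded polling loop, where checkReady stands in for the real API call:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitNodeReady polls checkReady until it reports true or the timeout
// elapses; checkReady is a stand-in for a real GET of the node object.
func waitNodeReady(timeout, interval time.Duration, checkReady func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ready, err := checkReady(); err == nil && ready {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for node to be Ready")
}

func main() {
	start := time.Now()
	err := waitNodeReady(6*time.Minute, 100*time.Millisecond, func() (bool, error) {
		return true, nil // this run's node was Ready on the first check
	})
	fmt.Printf("err=%v after %v\n", err, time.Since(start))
}
```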
	I1018 14:09:28.633793 1760410 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-891059" context rescaled to 1 replicas
	I1018 14:09:28.657122 1760410 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1018 14:09:28.657153 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1018 14:09:29.297640 1760410 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1018 14:09:29.297673 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1018 14:09:29.722108 1760410 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 14:09:29.722138 1760410 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1018 14:09:30.201846 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
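Each addon apply above passes every manifest for the addon as a separate -f flag to a single kubectl invocation, so the whole set succeeds or fails together. A sketch of how such a command line can be assembled; applyAddon is an illustrative name, and minikube actually executes the command over SSH via ssh_runner rather than locally:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// applyAddon builds (but does not run) a kubectl apply that takes every
// manifest of an addon in one invocation, as in the log line above.
func applyAddon(kubectl, kubeconfig string, manifests ...string) *exec.Cmd {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(cmd.Env, "KUBECONFIG="+kubeconfig)
	return cmd
}

func main() {
	cmd := applyAddon("/var/lib/minikube/binaries/v1.34.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/csi-hostpath-attacher.yaml",
		"/etc/kubernetes/addons/csi-hostpath-plugin.yaml")
	fmt.Println(strings.Join(cmd.Args, " "))
}
```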
	I1018 14:09:31.747160 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.502603848s)
	I1018 14:09:31.747234 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:31.747249 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:31.747635 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:31.747662 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:31.747675 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:31.747685 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:31.747976 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:31.748000 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:31.989912 1760410 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1018 14:09:31.989960 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:31.993852 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:31.994463 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:31.994498 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:31.994763 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:31.995004 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:31.995210 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:31.995372 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:32.401099 1760410 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1018 14:09:32.582819 1760410 addons.go:238] Setting addon gcp-auth=true in "addons-891059"
	I1018 14:09:32.582898 1760410 host.go:66] Checking if "addons-891059" exists ...
	I1018 14:09:32.583276 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:32.583338 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:32.598366 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38277
	I1018 14:09:32.598979 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:32.599565 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:32.599588 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:32.599990 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:32.600582 1760410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:09:32.600654 1760410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:09:32.615909 1760410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44783
	I1018 14:09:32.616524 1760410 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:09:32.616999 1760410 main.go:141] libmachine: Using API Version  1
	I1018 14:09:32.617024 1760410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:09:32.617441 1760410 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:09:32.617696 1760410 main.go:141] libmachine: (addons-891059) Calling .GetState
	I1018 14:09:32.619651 1760410 main.go:141] libmachine: (addons-891059) Calling .DriverName
	I1018 14:09:32.619882 1760410 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1018 14:09:32.619905 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHHostname
	I1018 14:09:32.623262 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:32.623788 1760410 main.go:141] libmachine: (addons-891059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:2f:9d", ip: ""} in network mk-addons-891059: {Iface:virbr1 ExpiryTime:2025-10-18 15:08:55 +0000 UTC Type:0 Mac:52:54:00:12:2f:9d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-891059 Clientid:01:52:54:00:12:2f:9d}
	I1018 14:09:32.623815 1760410 main.go:141] libmachine: (addons-891059) DBG | domain addons-891059 has defined IP address 192.168.39.100 and MAC address 52:54:00:12:2f:9d in network mk-addons-891059
	I1018 14:09:32.624039 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHPort
	I1018 14:09:32.624251 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHKeyPath
	I1018 14:09:32.624440 1760410 main.go:141] libmachine: (addons-891059) Calling .GetSSHUsername
	I1018 14:09:32.624678 1760410 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/addons-891059/id_rsa Username:docker}
	I1018 14:09:34.410431 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.146350667s)
	I1018 14:09:34.410505 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.410520 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.410535 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.098229729s)
	I1018 14:09:34.410591 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.410608 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.410627 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.08966013s)
	I1018 14:09:34.410671 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.410688 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.410780 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (9.082972673s)
	I1018 14:09:34.410825 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.410842 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.410885 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.082422149s)
	I1018 14:09:34.410912 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.410921 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.410996 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.040510674s)
	I1018 14:09:34.411019 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.411040 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.411044 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.411064 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.411075 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.411083 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.411111 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.411122 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.29701229s)
	I1018 14:09:34.411143 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.411148 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.411161 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.411170 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.411178 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.411185 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.411186 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.411194 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.411202 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.411209 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.411237 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (8.162212378s)
	W1018 14:09:34.411260 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:34.411279 1760410 retry.go:31] will retry after 156.548971ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
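This failure is unlikely to clear on retry: kubectl's client-side validation rejects ig-crd.yaml because the document declares neither apiVersion nor kind, which is consistent with the earlier transfer line (14:09:24.581733) showing the file arrived at only 14 bytes, far too small to hold a CRD. A stdlib sketch of the same pre-flight check; validateManifest is illustrative, not kubectl's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// validateManifest approximates the client-side check that failed here:
// every Kubernetes document must declare apiVersion and kind.
func validateManifest(doc string) error {
	var missing []string
	for _, field := range []string{"apiVersion:", "kind:"} {
		if !strings.Contains(doc, field) {
			missing = append(missing, strings.TrimSuffix(field, ":")+" not set")
		}
	}
	if len(missing) > 0 {
		return fmt.Errorf("error validating data: [%s]", strings.Join(missing, ", "))
	}
	return nil
}

func main() {
	// A 14-byte payload cannot carry a full CRD, so both required fields
	// are missing and every retry fails with the identical message.
	fmt.Println(validateManifest("# empty stub\n"))
}
```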
	I1018 14:09:34.411277 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.411304 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.411320 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.411329 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.411355 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.411385 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.139958439s)
	I1018 14:09:34.411415 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.411426 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.411451 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.586135977s)
	I1018 14:09:34.411563 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.411581 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.411476 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.413776 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.413792 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.413803 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.413813 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.413821 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.413830 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.413837 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.413839 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.413857 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.413878 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.413884 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.413892 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.413899 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.413949 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.413963 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.413976 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.413984 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.413993 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.414003 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.414010 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.414017 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.414067 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.414253 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.414280 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.414288 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.414297 1760410 addons.go:479] Verifying addon metrics-server=true in "addons-891059"
	I1018 14:09:34.414448 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.414488 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.414509 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.414541 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.415992 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.416015 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.416023 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.416037 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.416049 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.416063 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.415991 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.416140 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.416167 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.416177 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.416185 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.416194 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.416025 1760410 addons.go:479] Verifying addon ingress=true in "addons-891059"
	I1018 14:09:34.416625 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.416635 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.413977 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.416602 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.416980 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.416993 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.418102 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.418150 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.418163 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.418177 1760410 addons.go:479] Verifying addon registry=true in "addons-891059"
	I1018 14:09:34.418831 1760410 out.go:179] * Verifying ingress addon...
	I1018 14:09:34.418835 1760410 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-891059 service yakd-dashboard -n yakd-dashboard
	
	I1018 14:09:34.420852 1760410 out.go:179] * Verifying registry addon...
	I1018 14:09:34.422521 1760410 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1018 14:09:34.423238 1760410 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1018 14:09:34.503158 1760410 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 14:09:34.503192 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:34.503257 1760410 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1018 14:09:34.503271 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
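[note] The kapi.go:96 lines above and throughout the rest of this section are minikube's poll loop: it lists pods matching the label selector and re-checks roughly every half second until every match is Running and Ready. An approximately equivalent manual check (illustrative only; assumes a stock kubectl pointed at the same context):

	kubectl --context addons-891059 -n kube-system wait pod \
	  -l kubernetes.io/minikube-addons=registry \
	  --for=condition=Ready --timeout=6m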
	I1018 14:09:34.568542 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:34.621858 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.621880 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.622193 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.622248 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.622262 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	W1018 14:09:34.622394 1760410 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
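[note] The default-storageclass warning above is an optimistic-concurrency conflict: the addon read the local-path StorageClass, something else updated it first, and the subsequent Update was rejected because the submitted resourceVersion was stale. Re-reading the object and retrying resolves it; a patch also sidesteps the precondition, since a patch carries no resourceVersion. A hedged sketch of what the addon is attempting:

	kubectl --context addons-891059 patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'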
	I1018 14:09:34.659969 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:34.659996 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:34.660315 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:34.660316 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:34.660354 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:34.941419 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:34.942360 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:34.990391 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.549686758s)
	I1018 14:09:34.990429 1760410 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (6.927791238s)
	I1018 14:09:34.990461 1760410 api_server.go:72] duration metric: took 10.57479054s to wait for apiserver process to appear ...
	W1018 14:09:34.990458 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 14:09:34.990494 1760410 retry.go:31] will retry after 178.461593ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
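[note] The "no matches for kind \"VolumeSnapshotClass\" ... ensure CRDs are installed first" failure above is an ordering race, not a bad manifest: the same apply creates the snapshot CRDs and a VolumeSnapshotClass object, but API discovery has not registered the new kind by the time the custom resource is submitted, so the REST mapping fails; the forced re-apply later in this log succeeds once the CRD is served. Splitting the apply and waiting for the CRD to reach the Established condition avoids the race entirely (illustrative sequence; file paths as in the log):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml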
	I1018 14:09:34.990467 1760410 api_server.go:88] waiting for apiserver healthz status ...
	I1018 14:09:34.990545 1760410 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I1018 14:09:35.010676 1760410 api_server.go:279] https://192.168.39.100:8443/healthz returned 200:
	ok
	I1018 14:09:35.013686 1760410 api_server.go:141] control plane version: v1.34.1
	I1018 14:09:35.013719 1760410 api_server.go:131] duration metric: took 23.188895ms to wait for apiserver health ...
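[note] The healthz probe above hits the apiserver directly at https://192.168.39.100:8443/healthz and treats an HTTP 200 body of "ok" as healthy. The same check can be run through kubectl, which supplies the client credentials from the kubeconfig (illustrative):

	kubectl --context addons-891059 get --raw='/healthz'
	# /readyz and /livez expose the same machinery with per-check detail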
	I1018 14:09:35.013750 1760410 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 14:09:35.060072 1760410 system_pods.go:59] 16 kube-system pods found
	I1018 14:09:35.060119 1760410 system_pods.go:61] "amd-gpu-device-plugin-c5cbb" [64430541-160f-413b-b21e-6636047a8859] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 14:09:35.060127 1760410 system_pods.go:61] "coredns-66bc5c9577-9t6mk" [d2cf3593-0ffc-49aa-ab5d-1ecf71d259cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 14:09:35.060138 1760410 system_pods.go:61] "coredns-66bc5c9577-nf592" [e1dcbe4f-f240-4a2f-a4ff-686ee74288d6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 14:09:35.060145 1760410 system_pods.go:61] "etcd-addons-891059" [d809b325-765e-4e94-9832-03ad283377f1] Running
	I1018 14:09:35.060149 1760410 system_pods.go:61] "kube-apiserver-addons-891059" [edc4bec3-9171-4df8-a0e4-556ac2ece3e1] Running
	I1018 14:09:35.060152 1760410 system_pods.go:61] "kube-controller-manager-addons-891059" [03f45aa3-88da-45f0-9932-fa0a92d33e62] Running
	I1018 14:09:35.060157 1760410 system_pods.go:61] "kube-ingress-dns-minikube" [2d2be3a2-f8a7-4762-a4a6-aeea42df7e21] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 14:09:35.060160 1760410 system_pods.go:61] "kube-proxy-ckpzl" [a3ac992c-4401-40f5-93dd-7a525ec3b2a5] Running
	I1018 14:09:35.060163 1760410 system_pods.go:61] "kube-scheduler-addons-891059" [54facfd7-1a3c-4565-8ffb-d4ef204a0858] Running
	I1018 14:09:35.060168 1760410 system_pods.go:61] "metrics-server-85b7d694d7-zthlp" [23d1a687-8b62-4e3f-be5e-9664ae7f101e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 14:09:35.060178 1760410 system_pods.go:61] "nvidia-device-plugin-daemonset-5z8tb" [0e21578d-6373-41a1-aaa9-7c86d80f9c8c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 14:09:35.060186 1760410 system_pods.go:61] "registry-6b586f9694-z6m2x" [e32c82d5-bbaf-47cf-a6dd-4488d4e419e4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 14:09:35.060194 1760410 system_pods.go:61] "registry-creds-764b6fb674-sg8jp" [55d9e015-f26a-4270-8187-b8312c331504] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 14:09:35.060203 1760410 system_pods.go:61] "registry-proxy-tmmvd" [cb52b147-d27f-4a99-9ec8-ffd5f90861e4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 14:09:35.060209 1760410 system_pods.go:61] "snapshot-controller-7d9fbc56b8-b9tnq" [a028a732-94f8-46f5-8ade-adc72e44a92d] Pending
	I1018 14:09:35.060218 1760410 system_pods.go:61] "storage-provisioner" [a6f8bdeb-9db0-44f3-b3cb-8396901acaf5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 14:09:35.060229 1760410 system_pods.go:74] duration metric: took 46.469158ms to wait for pod list to return data ...
	I1018 14:09:35.060248 1760410 default_sa.go:34] waiting for default service account to be created ...
	I1018 14:09:35.104632 1760410 default_sa.go:45] found service account: "default"
	I1018 14:09:35.104663 1760410 default_sa.go:55] duration metric: took 44.40546ms for default service account to be created ...
	I1018 14:09:35.104677 1760410 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 14:09:35.169265 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 14:09:35.176957 1760410 system_pods.go:86] 17 kube-system pods found
	I1018 14:09:35.177007 1760410 system_pods.go:89] "amd-gpu-device-plugin-c5cbb" [64430541-160f-413b-b21e-6636047a8859] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 14:09:35.177019 1760410 system_pods.go:89] "coredns-66bc5c9577-9t6mk" [d2cf3593-0ffc-49aa-ab5d-1ecf71d259cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 14:09:35.177052 1760410 system_pods.go:89] "coredns-66bc5c9577-nf592" [e1dcbe4f-f240-4a2f-a4ff-686ee74288d6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 14:09:35.177068 1760410 system_pods.go:89] "etcd-addons-891059" [d809b325-765e-4e94-9832-03ad283377f1] Running
	I1018 14:09:35.177079 1760410 system_pods.go:89] "kube-apiserver-addons-891059" [edc4bec3-9171-4df8-a0e4-556ac2ece3e1] Running
	I1018 14:09:35.177087 1760410 system_pods.go:89] "kube-controller-manager-addons-891059" [03f45aa3-88da-45f0-9932-fa0a92d33e62] Running
	I1018 14:09:35.177100 1760410 system_pods.go:89] "kube-ingress-dns-minikube" [2d2be3a2-f8a7-4762-a4a6-aeea42df7e21] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 14:09:35.177106 1760410 system_pods.go:89] "kube-proxy-ckpzl" [a3ac992c-4401-40f5-93dd-7a525ec3b2a5] Running
	I1018 14:09:35.177117 1760410 system_pods.go:89] "kube-scheduler-addons-891059" [54facfd7-1a3c-4565-8ffb-d4ef204a0858] Running
	I1018 14:09:35.177125 1760410 system_pods.go:89] "metrics-server-85b7d694d7-zthlp" [23d1a687-8b62-4e3f-be5e-9664ae7f101e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 14:09:35.177134 1760410 system_pods.go:89] "nvidia-device-plugin-daemonset-5z8tb" [0e21578d-6373-41a1-aaa9-7c86d80f9c8c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 14:09:35.177145 1760410 system_pods.go:89] "registry-6b586f9694-z6m2x" [e32c82d5-bbaf-47cf-a6dd-4488d4e419e4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 14:09:35.177156 1760410 system_pods.go:89] "registry-creds-764b6fb674-sg8jp" [55d9e015-f26a-4270-8187-b8312c331504] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 14:09:35.177171 1760410 system_pods.go:89] "registry-proxy-tmmvd" [cb52b147-d27f-4a99-9ec8-ffd5f90861e4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 14:09:35.177180 1760410 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b9tnq" [a028a732-94f8-46f5-8ade-adc72e44a92d] Pending
	I1018 14:09:35.177187 1760410 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bzhfk" [f3e3fb2c-05b7-448d-bca6-3438d70868b1] Pending
	I1018 14:09:35.177198 1760410 system_pods.go:89] "storage-provisioner" [a6f8bdeb-9db0-44f3-b3cb-8396901acaf5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 14:09:35.177213 1760410 system_pods.go:126] duration metric: took 72.526149ms to wait for k8s-apps to be running ...
	I1018 14:09:35.177228 1760410 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 14:09:35.177303 1760410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 14:09:35.445832 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:35.461317 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:35.939729 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:35.942319 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:36.445234 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.243330128s)
	I1018 14:09:36.445310 1760410 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.825399752s)
	I1018 14:09:36.445314 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:36.445449 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:36.445853 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:36.445924 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:36.445941 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:36.445953 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:36.445962 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:36.446272 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:36.446292 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:36.446304 1760410 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-891059"
	I1018 14:09:36.447257 1760410 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1018 14:09:36.448070 1760410 out.go:179] * Verifying csi-hostpath-driver addon...
	I1018 14:09:36.449546 1760410 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 14:09:36.450329 1760410 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1018 14:09:36.450870 1760410 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1018 14:09:36.450894 1760410 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1018 14:09:36.458277 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:36.471857 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:36.484451 1760410 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 14:09:36.484481 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:36.597464 1760410 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1018 14:09:36.597499 1760410 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1018 14:09:36.732996 1760410 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 14:09:36.733028 1760410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1018 14:09:36.885741 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 14:09:36.948270 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:36.948391 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:36.960478 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:37.436446 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:37.439412 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:37.456938 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:37.927403 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:37.928102 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:37.956527 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:38.404132 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.835532164s)
	W1018 14:09:38.404196 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:38.404224 1760410 retry.go:31] will retry after 203.009637ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 14:09:38.433864 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:38.434743 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:38.531382 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:38.607892 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:38.751077 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.58176118s)
	I1018 14:09:38.751130 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:38.751161 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:38.751178 1760410 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.573842033s)
	I1018 14:09:38.751219 1760410 system_svc.go:56] duration metric: took 3.573986856s WaitForService to wait for kubelet
	I1018 14:09:38.751238 1760410 kubeadm.go:586] duration metric: took 14.335564787s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 14:09:38.751274 1760410 node_conditions.go:102] verifying NodePressure condition ...
	I1018 14:09:38.751483 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:38.751506 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:38.751516 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:38.751529 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:38.751536 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:38.751791 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:38.751808 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:38.851019 1760410 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1018 14:09:38.851051 1760410 node_conditions.go:123] node cpu capacity is 2
	I1018 14:09:38.851069 1760410 node_conditions.go:105] duration metric: took 99.788234ms to run NodePressure ...
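[note] For scale: 17734596Ki of ephemeral storage is 17734596 / 1048576 ≈ 16.9 GiB, and the node advertises 2 CPUs, consistent with a small single-node test VM rather than the host's full capacity.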
	I1018 14:09:38.851086 1760410 start.go:241] waiting for startup goroutines ...
	I1018 14:09:38.908065 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.022268979s)
	I1018 14:09:38.908143 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:38.908165 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:38.908474 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:38.908500 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:38.908510 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:09:38.908518 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:09:38.908801 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:09:38.908819 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:09:38.908845 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:09:38.909928 1760410 addons.go:479] Verifying addon gcp-auth=true in "addons-891059"
	I1018 14:09:38.911794 1760410 out.go:179] * Verifying gcp-auth addon...
	I1018 14:09:38.913871 1760410 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1018 14:09:38.969859 1760410 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1018 14:09:38.969881 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:38.979126 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:38.979302 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:38.999385 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:39.427914 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:39.428338 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:39.431173 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:39.465614 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:39.930950 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:39.936675 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:39.942841 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:39.965308 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:40.421639 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:40.429893 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:40.429965 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:40.457177 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:40.676324 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.068378617s)
	W1018 14:09:40.676402 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:40.676434 1760410 retry.go:31] will retry after 741.361151ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 14:09:40.925104 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:40.933643 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:41.024046 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:41.027134 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:41.418785 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:41.422791 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:41.437450 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:41.437815 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:41.458160 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:41.920933 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:41.931994 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:41.932787 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:41.954074 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:42.420874 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:42.427884 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:42.432996 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:42.455566 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:42.935811 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:42.935897 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:42.936364 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:42.948192 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.529349883s)
	W1018 14:09:42.948266 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:42.948305 1760410 retry.go:31] will retry after 603.252738ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 14:09:42.961547 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:43.421694 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:43.425963 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:43.432125 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:43.454728 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:43.552443 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:43.920168 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:43.926196 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:43.932562 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:43.954780 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:44.418856 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:44.434761 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:44.434815 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:44.485100 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:44.719803 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.167302475s)
	W1018 14:09:44.719876 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:44.719906 1760410 retry.go:31] will retry after 756.582939ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 14:09:44.919572 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:44.929974 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:44.930622 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:44.954972 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:45.419454 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:45.431537 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:45.435706 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:45.458249 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:45.477327 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:45.921959 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:45.932928 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:45.933443 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:45.960253 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:46.424197 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:46.434428 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:46.437611 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:46.457951 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:46.721183 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.243789601s)
	W1018 14:09:46.721253 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:46.721284 1760410 retry.go:31] will retry after 1.22541109s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 14:09:46.920063 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:46.927281 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:46.930483 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:46.954658 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:47.422281 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:47.427164 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:47.431758 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:47.456565 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:47.926249 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:47.939833 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:47.940075 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:47.946922 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:47.966036 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:48.420073 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:48.432202 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:48.434126 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:48.457282 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:48.920393 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:48.930362 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:48.932858 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:48.957018 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:49.201980 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.255004165s)
	W1018 14:09:49.202036 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:49.202059 1760410 retry.go:31] will retry after 2.58897953s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 14:09:49.420911 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:49.428333 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:49.430869 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:49.457131 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:50.368228 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:50.376847 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:50.376847 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:50.377051 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:50.476106 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:50.476372 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:50.479024 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:50.479966 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:50.920534 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:50.935331 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:50.938361 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:50.961186 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:51.424118 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:51.430809 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:51.432102 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:51.455044 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:51.791362 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:51.922858 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:51.934999 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:51.935987 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:51.958913 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:52.642039 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:52.642370 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:52.644501 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:52.644727 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:52.918752 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:52.926588 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:52.930871 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:52.956219 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:53.183831 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.392411457s)
	W1018 14:09:53.183895 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:53.183924 1760410 retry.go:31] will retry after 4.131889795s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:53.417891 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:53.426911 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:53.428495 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:53.454047 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:53.919491 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:53.929299 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:53.929427 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:53.958043 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:54.418456 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:54.427470 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:54.427657 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:54.456313 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:54.919925 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:54.927822 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:54.928397 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:54.955119 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:55.419222 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:55.429271 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:55.430752 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:55.455541 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:55.918460 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:55.928654 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:55.930176 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:55.958687 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:56.417289 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:56.426666 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:56.426937 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:56.456516 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:56.921455 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:56.931545 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:56.932200 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:56.957601 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:57.316649 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:09:57.422032 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:57.435023 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:57.437778 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:57.455440 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:57.921161 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:57.929313 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:57.929394 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:57.955970 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:58.423288 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:58.439731 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:58.440095 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:58.786495 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:58.919590 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:58.930253 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:58.932272 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:58.957912 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:58.980642 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.663942768s)
	W1018 14:09:58.980696 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:58.980722 1760410 retry.go:31] will retry after 6.037644719s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:09:59.421401 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:59.428863 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:59.429465 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:59.458445 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:09:59.918316 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:09:59.928753 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:09:59.928856 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:09:59.955245 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:00.418136 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:00.427048 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:00.428214 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:00.457368 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:00.919392 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:00.929649 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:00.931313 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:00.959561 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:01.420084 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:01.426435 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:01.428419 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:01.463886 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:01.918664 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:01.927921 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:01.927979 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:01.954513 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:02.417929 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:02.426037 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:02.428261 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:02.455407 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:02.922146 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:02.928949 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:02.933375 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:02.956535 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:03.420697 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:03.429208 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:03.432897 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:03.459039 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:03.918554 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:03.926959 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:03.927105 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:03.955657 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:04.418489 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:04.430359 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:04.430521 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:04.456644 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:04.918502 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:04.930599 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:04.930923 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:04.956737 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:05.018763 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:10:05.417681 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:05.428004 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:05.429827 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:05.456781 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:05.917569 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:05.926923 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:05.928124 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:05.957076 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:06.036566 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.017738492s)
	W1018 14:10:06.036634 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:10:06.036662 1760410 retry.go:31] will retry after 12.004802236s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:10:06.419404 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:06.429963 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:06.430297 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:06.457600 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:06.919260 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:06.929676 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:06.929775 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:07.155631 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:07.418580 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:07.427122 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:07.428776 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:07.457310 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:07.922270 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:07.926818 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:07.929313 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:07.956530 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:08.418802 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:08.429772 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:08.430398 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:08.456743 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:08.919063 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:08.930278 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:08.931169 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:08.954708 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:09.424687 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:09.432292 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:09.435514 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:09.460217 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:09.923294 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:09.930199 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:09.931023 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:09.955035 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:10.419846 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:10.426749 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:10.429140 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:10.456969 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:10.953436 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:10.956917 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:10.957054 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:10.957495 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:11.418736 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:11.426419 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:11.430935 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:11.455617 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:11.918928 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:11.927115 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:11.931414 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:11.960289 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:12.418970 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:12.430735 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:12.433659 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:12.456647 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:12.921054 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:12.928629 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:12.928668 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:12.956226 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:13.420386 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:13.427464 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:13.429090 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:13.455488 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:13.918328 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:13.927700 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:13.928318 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:13.954810 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:14.419754 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:14.425924 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:14.427917 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:14.455974 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:14.925112 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:14.929625 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:14.933370 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:14.957078 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:15.418580 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:15.428235 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:15.429169 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:15.457022 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:15.919800 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:15.936816 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:15.937017 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:15.957268 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:16.417946 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:16.427385 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:16.431794 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:16.456614 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:16.919525 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:16.926577 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:10:16.926658 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:16.954174 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:17.421789 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:17.426437 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:17.431339 1760410 kapi.go:107] duration metric: took 43.008095172s to wait for kubernetes.io/minikube-addons=registry ...
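The registry label selector above went healthy after roughly 43s of polling. An equivalent one-off check can be run by hand with kubectl wait (a hand-run alternative for reproducing the wait, not what minikube executes internally):

	kubectl --context addons-891059 --namespace kube-system wait pod \
	  -l kubernetes.io/minikube-addons=registry \
	  --for=condition=Ready --timeout=6m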
	I1018 14:10:17.457873 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:17.918594 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:17.929987 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:17.961960 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:18.042188 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:10:18.422928 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:18.427500 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:18.456271 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:18.919452 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:18.930289 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:18.956388 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:19.361633 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.319335622s)
	W1018 14:10:19.361689 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:10:19.361728 1760410 retry.go:31] will retry after 15.164014777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:10:19.422771 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:19.438239 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:19.456621 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:19.921757 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:19.928298 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:19.956842 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:20.420260 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:20.427508 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:20.458936 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:20.918928 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:20.927378 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:20.955188 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:21.420104 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:21.426947 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:21.524486 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:21.918327 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:21.927194 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:21.955524 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:22.423531 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:22.426633 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:22.454711 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:22.921113 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:22.928945 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:22.954404 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:23.420637 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:23.430677 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:23.459231 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:23.919372 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:23.928323 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:23.958731 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:24.420036 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:24.427298 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:24.456668 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:24.919003 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:24.927657 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:24.957888 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:25.421338 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:25.427501 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:25.455612 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:25.918199 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:25.927869 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:25.958203 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:26.419024 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:26.428832 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:26.456514 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:26.918247 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:26.928171 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:26.956494 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:27.418446 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:27.430922 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:27.460225 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:27.934863 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:27.935267 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:27.956304 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:28.418276 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:28.426282 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:28.455657 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:28.921058 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:28.928216 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:28.957699 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:29.423964 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:29.429784 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:29.459912 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:29.919968 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:29.926486 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:30.021594 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:30.431798 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:30.435432 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:30.456454 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:30.930069 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:30.943105 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:30.955957 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:31.429432 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:31.438231 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:31.455431 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:31.921095 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:31.931309 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:31.956251 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:32.420152 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:32.428240 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:32.458714 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:32.922542 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:32.930043 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:32.957260 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:33.419500 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:33.428933 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:33.455363 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:33.923146 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:33.929585 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:33.958835 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:34.420137 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:34.426760 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:34.457114 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:34.526904 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:10:34.919159 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:34.928439 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:34.955153 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:35.418928 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:35.426233 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:35.458485 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:35.764870 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.237905947s)
	W1018 14:10:35.764934 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:10:35.764957 1760410 retry.go:31] will retry after 14.798475806s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
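Across these attempts the retry delays logged by retry.go (2.59s, 4.13s, 6.04s, 12.00s, 15.16s, 14.80s) grow roughly exponentially with jitter before flattening out. A minimal shell sketch of that pattern, reusing the exact command from the log; the doubling and the cap are assumptions about the backoff policy, not retry.go's actual code:

	# retry the same apply with capped, roughly-doubling delays (policy assumed)
	delay=2
	for attempt in 1 2 3 4 5 6; do
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.34.1/kubectl apply --force \
	    -f /etc/kubernetes/addons/ig-crd.yaml \
	    -f /etc/kubernetes/addons/ig-deployment.yaml && break
	  sleep "$delay"
	  delay=$(( delay * 2 > 16 ? 16 : delay * 2 ))  # cap the backoff (assumed)
	done

Note that because ig-crd.yaml itself is malformed, no amount of retrying can succeed here; each attempt fails with the same validation error until the manifest is fixed.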
	I1018 14:10:35.919540 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:35.928534 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:35.955008 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:36.450125 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:36.453729 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:36.536855 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:36.917765 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:36.925569 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:36.955287 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:37.419773 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:37.427166 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:37.456318 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:37.919552 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:37.927629 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:38.025256 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:38.424973 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:38.428550 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:38.453898 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:38.919099 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:38.926293 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:38.955682 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:39.418953 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:39.430007 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:39.459225 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:39.920652 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:39.929231 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:39.954710 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:40.421937 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:40.429412 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:40.480118 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:40.920635 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:40.929091 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:40.956998 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:41.426085 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:41.427988 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:41.459105 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:41.918797 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:41.926487 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:41.955036 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:42.420125 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:42.428890 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:42.454689 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:42.919029 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:42.927753 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:42.954419 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:43.422025 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:43.426830 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:43.457376 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:43.917234 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:43.930520 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:43.956616 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:44.419241 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:44.428799 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:44.456787 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:44.918484 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:44.928332 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:44.961125 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:45.421688 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:45.427032 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:45.457168 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:45.919022 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:45.927029 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:45.959091 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:46.418637 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:46.429220 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:46.455413 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:46.919149 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:46.926519 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:46.956560 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:47.419157 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:47.427737 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:47.455569 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:47.918673 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:47.926052 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:47.956842 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:48.420322 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:48.430745 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:48.456105 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:48.922457 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:48.928328 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:48.956428 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:49.434222 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:49.437527 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:49.461279 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:49.920966 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:49.929362 1760410 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:10:49.956797 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:50.418327 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:50.430238 1760410 kapi.go:107] duration metric: took 1m16.007712358s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1018 14:10:50.456335 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:50.564457 1760410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:10:50.917217 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:50.958103 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:51.421689 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:51.455392 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:51.920286 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:51.942284 1760410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.377769111s)
	W1018 14:10:51.942338 1760410 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:10:51.942424 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:10:51.942439 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:10:51.942850 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:10:51.942873 1760410 main.go:141] libmachine: (addons-891059) DBG | Closing plugin on server side
	I1018 14:10:51.942875 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 14:10:51.942891 1760410 main.go:141] libmachine: Making call to close driver server
	I1018 14:10:51.942902 1760410 main.go:141] libmachine: (addons-891059) Calling .Close
	I1018 14:10:51.943167 1760410 main.go:141] libmachine: Successfully made call to close driver server
	I1018 14:10:51.943186 1760410 main.go:141] libmachine: Making call to close connection to plugin binary
	W1018 14:10:51.943290 1760410 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
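
At this point the retries are exhausted and enabling 'inspektor-gadget' is reported as failed; note that it is absent from the enabled-addons list further down. As the kubectl message itself suggests, the same apply could be retried by hand with validation disabled (a sketch reusing the exact paths from the log; --validate=false is the only change):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/ig-crd.yaml \
	  -f /etc/kubernetes/addons/ig-deployment.yaml
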
	I1018 14:10:51.956095 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:52.418797 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:52.455097 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:52.918142 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:52.955842 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:53.417788 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:53.454466 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:53.928372 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:10:53.956892 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:54.421372 1760410 kapi.go:107] duration metric: took 1m15.507497357s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1018 14:10:54.422977 1760410 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-891059 cluster.
	I1018 14:10:54.424170 1760410 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1018 14:10:54.425362 1760410 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
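
For the gcp-auth-skip-secret hint above, a minimal pod sketch (the label key comes from the message; the "true" value and all names are assumptions):

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-auth                  # hypothetical name
	  labels:
	    gcp-auth-skip-secret: "true"     # value assumed; the hint only names the key
	spec:
	  containers:
	  - name: app
	    image: gcr.io/k8s-minikube/busybox
	    command: ["sleep", "3600"]
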
	I1018 14:10:54.455256 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:54.954565 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:55.455801 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:55.954326 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:56.455155 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:56.954954 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:57.455480 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:57.957998 1760410 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:10:58.454831 1760410 kapi.go:107] duration metric: took 1m22.004497442s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1018 14:10:58.456573 1760410 out.go:179] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, storage-provisioner, cloud-spanner, metrics-server, ingress-dns, registry-creds, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1018 14:10:58.457854 1760410 addons.go:514] duration metric: took 1m34.042106278s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin storage-provisioner cloud-spanner metrics-server ingress-dns registry-creds yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1018 14:10:58.457949 1760410 start.go:246] waiting for cluster config update ...
	I1018 14:10:58.457975 1760410 start.go:255] writing updated cluster config ...
	I1018 14:10:58.458280 1760410 ssh_runner.go:195] Run: rm -f paused
	I1018 14:10:58.466229 1760410 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 14:10:58.470432 1760410 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9t6mk" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:58.477134 1760410 pod_ready.go:94] pod "coredns-66bc5c9577-9t6mk" is "Ready"
	I1018 14:10:58.477163 1760410 pod_ready.go:86] duration metric: took 6.703976ms for pod "coredns-66bc5c9577-9t6mk" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:58.479169 1760410 pod_ready.go:83] waiting for pod "etcd-addons-891059" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:58.489364 1760410 pod_ready.go:94] pod "etcd-addons-891059" is "Ready"
	I1018 14:10:58.489404 1760410 pod_ready.go:86] duration metric: took 10.207192ms for pod "etcd-addons-891059" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:58.491622 1760410 pod_ready.go:83] waiting for pod "kube-apiserver-addons-891059" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:58.497381 1760410 pod_ready.go:94] pod "kube-apiserver-addons-891059" is "Ready"
	I1018 14:10:58.497406 1760410 pod_ready.go:86] duration metric: took 5.754148ms for pod "kube-apiserver-addons-891059" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:58.499963 1760410 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-891059" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:58.870880 1760410 pod_ready.go:94] pod "kube-controller-manager-addons-891059" is "Ready"
	I1018 14:10:58.870932 1760410 pod_ready.go:86] duration metric: took 370.945889ms for pod "kube-controller-manager-addons-891059" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:59.070811 1760410 pod_ready.go:83] waiting for pod "kube-proxy-ckpzl" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:59.471322 1760410 pod_ready.go:94] pod "kube-proxy-ckpzl" is "Ready"
	I1018 14:10:59.471383 1760410 pod_ready.go:86] duration metric: took 400.536721ms for pod "kube-proxy-ckpzl" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:10:59.672128 1760410 pod_ready.go:83] waiting for pod "kube-scheduler-addons-891059" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:11:00.071253 1760410 pod_ready.go:94] pod "kube-scheduler-addons-891059" is "Ready"
	I1018 14:11:00.071288 1760410 pod_ready.go:86] duration metric: took 399.125586ms for pod "kube-scheduler-addons-891059" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:11:00.071306 1760410 pod_ready.go:40] duration metric: took 1.60503304s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 14:11:00.118648 1760410 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 14:11:00.120494 1760410 out.go:179] * Done! kubectl is now configured to use "addons-891059" cluster and "default" namespace by default
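
With kubectl now pointed at the cluster, the pods polled above can be inspected directly using the same label selectors that appear in the log (a sketch; namespaces taken from the pod metadata later in this report):

	kubectl --context addons-891059 get pods -n kube-system \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver
	kubectl --context addons-891059 get pods -n ingress-nginx \
	  -l app.kubernetes.io/name=ingress-nginx
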
	
	
	==> CRI-O <==
	Oct 18 14:14:24 addons-891059 crio[822]: time="2025-10-18 14:14:24.524114457Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760796864524086090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:520517,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=16769125-a626-4b5f-8353-325f60d4a922 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:14:24 addons-891059 crio[822]: time="2025-10-18 14:14:24.524891178Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=09a8361a-b0f6-4f5d-89e9-5470f59db728 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:14:24 addons-891059 crio[822]: time="2025-10-18 14:14:24.525133960Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=09a8361a-b0f6-4f5d-89e9-5470f59db728 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:14:24 addons-891059 crio[822]: time="2025-10-18 14:14:24.526431345Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:80a91b61e6217003e742ebfb64e4ab9e5c4d86d6bd4dcf5ce1a4f27f87288b3f,PodSandboxId:a57ef3e4467d48044f0a63b3de9f53b115ab8d3ebe0f1b8e7fe32582ef6d7734,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:4fae4c1c18e77352b66e795f7d98a24f775d1e9f3ef847454e4857244ebc6c03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b19891abe61fd4f334e0bb4345313cac562b66561765ae851db1ef2f81ba249a,State:CONTAINER_RUNNING,CreatedAt:1760796857923457559,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-6945c6f4d-67bwz,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: fdb7e1d4-852f-4236-9cdf-29089e1285d4,},Annotations:map[string]string{io.kubernetes.container.hash: b7102817,io.kubernetes.container.ports: [{\"name\":\"h
ttp\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4019b2f5a82ebc7fb6dabae9a874d699665a5d8c69de73eb709ca4a501ac015,PodSandboxId:871fa03a650614957b7d3d2014f39478cf8cb5cd45eb550c6abd6222b43732a9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760796662606988160,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 75ccff45-9202-4152-b90e-8a5a6d306c7d,},Annotations:map[string]string{io.k
ubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5e462bcd2b5f465fe95346688533db6801a9c93215937bfbcf4abffe97f6c0,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1760796657878096678,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,
},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e429add87fb7915cacc16256e7047f4f649d645dd6350add56e90ceda89be5cb,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1760796656108432125,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
d7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c154e6ad0036f8e08a29b6d27bd296913987ed0f4235dd603093178c177e86b,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1760796654288737227,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e42c0ad16a76575cdf86955e752de6fc61fbdffec61b610745b88dc300290e,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1760796650670836429,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ce2976bee33494c9148720fc6f41dafc7c06699c436b9f7352992e408fc1ce,PodSandboxId:2f9eb1464924400027510bd40640a85e472321a499aaff7e545d8f90a3a2b454,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760796649028158931,Label
s:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-bphwz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5355fea1-7cc1-4587-853e-61aaaa6f569e,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9830a2003573c4745aeef463de8c6f60ef95ad1ea86413fbba89a04f8d287e29,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&
ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1760796641350506570,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b41579872800aaa54c544cb3ac01bd4bfbdb75ed8bfb2068b63a461effcb494,PodSandboxId:d23e703cbfeb7f985
a5ee31bbb8e9a0beaaca929b2a9d12c66bc036a83f06e54,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1760796639902169014,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efc965f-2bb9-4589-8896-270849ff244b,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b6304f138a157686248517d9a4334e9f7e0a04eb4d75d3e8242c7d66099747,PodSandb
oxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1760796637960180053,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:3781d3641f70c2afdd9e7cf33046996dcefa7ceeb31eaeb6735fe958ea81fbdf,PodSandboxId:2d23bcaba041603a7033e5364863b52ee33056bf513c91b93cbd051dc4ee50fb,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1760796636160087491,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-bzhfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3e3fb2c-05b7-448d-bca6-3438d70868b1,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernete
s.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bb6d569a2a3f2ef99bf632b0e17f74e8f99944756e5338f36177afc9784250e,PodSandboxId:7a44187aa2259b4391883c3f4e9b9dfefc7c60831b7bfc9273715b7a8b6675b5,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1760796636024422683,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-b9tnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a028a732-94f8-46f5-8ade-adc72e44a92d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6267021fe47465dfb0a972ca3ac1853819fcb8ec9c4af79da3515676f56c70d,PodSandboxId:7483a2b2bce44deaa3b7126ad65266f9ccb9eb59517cc399fde2646bdce00e31,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760796634343510547,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lz2l5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: edbb1e3e-09f2-4958-b943-de86e541c2ab,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernete
s.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e786527308546addc508c7f9fde815f3dbf888dbbd28417a6fda88b88fa8ab,PodSandboxId:19bb29e5d6915f98e1c622bd12dfd02a46541ba9d2922196d95c45d1eef03591,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1760796634154278160,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66fa96af-5215-410d-899c-8ee3de6c2691,},Annotations:map[string]string{io.kubernetes.container.h
ash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:405281ec9edfa02e6ef1722dec6adc497496544ed9e116c4827e07faa66e42b3,PodSandboxId:784fb9851d0e370b86d85cb15f009b0ada6ea2b7f21e505158415537390f7d3a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760796631912253285,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nbrm2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e48f1e46-67fb-4c71-bc01-b2f3743345f0,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c389fedf82c73101b96bb9331713ba0cf1fe89e497bb463f4a1a5c8f965331eb,PodSandboxId:f6cf7a6905b38496b0fb0dffcad88c191af9be4e2d42b30916a7239099dd25d8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1760796623404092240,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-kj8pr,io.kubernetes.pod.namespace: local-path-storage,io.kuberne
tes.pod.uid: b9e6b11c-bbb9-4e19-9cb4-ca24b2aa3018,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:751b2df6a5bf4c3261a679f6e961086b9a7e8a0d308b47ba5a823ed41d50ff7c,PodSandboxId:e7adc46dd97a6e6351f075aad05529d7968ddcfdb815b441bff765545717c999,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760796621649083356,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-bz8k2,io.kubernetes.pod.namespace: gadget,io.kubernet
es.pod.uid: 32f0a88f-aea2-4621-a5b1-df5a3fb86a2b,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3faa5d947b9ededdb0f9530cfb2606f9d20f027050a247e368207048d7856361,PodSandboxId:04626452678ece1669cf1b64aa42ec4e38880fec5bfbbb2efb6abcab66a2eba0,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760796611084064989,Labels:map[string]string{io.kubernetes.
container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2be3a2-f8a7-4762-a4a6-aeea42df7e21,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da75007bac0f47603bb3540fd8ae444427639a840b26793c26a279445acc6504,PodSandboxId:bf130a85fe68d5cdda719544aa9afd112627aeb7acb1df2c62daeedf486112a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State
:CONTAINER_RUNNING,CreatedAt:1760796577983458040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6f8bdeb-9db0-44f3-b3cb-8396901acaf5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90350cf8ae05058e381c6f06dfaaa1b66c33001b294c94602cbb4601d22e5bc2,PodSandboxId:b439dd6e51abd6ee7156af98c543df3bcd516cd309de6b0b6fd934ae60d4579a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459
a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760796574525913819,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-c5cbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64430541-160f-413b-b21e-6636047a8859,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b099b5b37807cb6ddae926ed2ce7fd3b3113ee1520cb817da8f25923c16c925,PodSandboxId:ba30da275bea105c47caa89fd0d4a924e96bd43b200434b972d0f1686c5cdb46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f
07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760796569075663973,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9t6mk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2cf3593-0ffc-49aa-ab5d-1ecf71d259cc,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97e1670c81585e6415c369e52af3deebb586e548711c359ac4fe22d13bfbf881,PodSandbo
xId:8fb6c60415fdaa40da442b8d93572f59350e86e5027e05f1e616ddc3e66d1895,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760796567868668763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ckpzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3ac992c-4401-40f5-93dd-7a525ec3b2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f010fdc156cb398c84f19945fc8b9f186ef23cb554bce047cf0bdadc63ef552,PodSandboxId:bfa6fdc1baf4d2d9eaa5d56358672
ee6314ea527df88bc7c5cfbb6d68599a772,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760796553601510681,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4360d09804819a4ab0d1ffed7423947,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873a633e0ebfdc97218e103cd398dde37
7449c146a2b3d8affa3222d72e07fad,PodSandboxId:4b35987ede0428e0950b004d1104001ead21d6b6989238185c2fb74d3cf3bf44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760796553612924961,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1348b107c675acfd26c3d687c91d60c5,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50cc3d2477595030b199dee8a2c8a4cb8f2f508dbbe7bdf89f535de0d3d1d6b6,PodSandboxId:b783fc0f686a0773f409244090fb0347fd53adfbe3110712527fc3d39b81e149,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760796553577778017,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5086595138b36f6eb8ac54e83c6bc182,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.contain
er.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:550e8ca214589028236bc3f3e98efbed492d3f84addbacedfb6929bee8541bab,PodSandboxId:c8fbc229d4f5f4b227bfc321c455f9928cc82e2099fb0746d33c7d9c893295f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760796553532990421,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97082571db3e60e44c3d60e99a384436,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"nam
e\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=09a8361a-b0f6-4f5d-89e9-5470f59db728 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:14:24 addons-891059 crio[822]: time="2025-10-18 14:14:24.579161956Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a23dcb35-6bc7-43fd-b60f-44d930261d37 name=/runtime.v1.RuntimeService/Version
	Oct 18 14:14:24 addons-891059 crio[822]: time="2025-10-18 14:14:24.579606515Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a23dcb35-6bc7-43fd-b60f-44d930261d37 name=/runtime.v1.RuntimeService/Version
	Oct 18 14:14:24 addons-891059 crio[822]: time="2025-10-18 14:14:24.582257433Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e001a751-9283-4910-9a2a-ee6666fe6c8d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:14:24 addons-891059 crio[822]: time="2025-10-18 14:14:24.583650862Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760796864583619418,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:520517,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e001a751-9283-4910-9a2a-ee6666fe6c8d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:14:24 addons-891059 crio[822]: time="2025-10-18 14:14:24.584932206Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=49e1bad5-2775-43f7-a0e1-cc89a3b8eefa name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:14:24 addons-891059 crio[822]: time="2025-10-18 14:14:24.585060575Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=49e1bad5-2775-43f7-a0e1-cc89a3b8eefa name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:14:24 addons-891059 crio[822]: time="2025-10-18 14:14:24.585978828Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:80a91b61e6217003e742ebfb64e4ab9e5c4d86d6bd4dcf5ce1a4f27f87288b3f,PodSandboxId:a57ef3e4467d48044f0a63b3de9f53b115ab8d3ebe0f1b8e7fe32582ef6d7734,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:4fae4c1c18e77352b66e795f7d98a24f775d1e9f3ef847454e4857244ebc6c03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b19891abe61fd4f334e0bb4345313cac562b66561765ae851db1ef2f81ba249a,State:CONTAINER_RUNNING,CreatedAt:1760796857923457559,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-6945c6f4d-67bwz,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: fdb7e1d4-852f-4236-9cdf-29089e1285d4,},Annotations:map[string]string{io.kubernetes.container.hash: b7102817,io.kubernetes.container.ports: [{\"name\":\"h
ttp\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4019b2f5a82ebc7fb6dabae9a874d699665a5d8c69de73eb709ca4a501ac015,PodSandboxId:871fa03a650614957b7d3d2014f39478cf8cb5cd45eb550c6abd6222b43732a9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760796662606988160,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 75ccff45-9202-4152-b90e-8a5a6d306c7d,},Annotations:map[string]string{io.k
ubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5e462bcd2b5f465fe95346688533db6801a9c93215937bfbcf4abffe97f6c0,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1760796657878096678,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,
},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e429add87fb7915cacc16256e7047f4f649d645dd6350add56e90ceda89be5cb,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1760796656108432125,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
d7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c154e6ad0036f8e08a29b6d27bd296913987ed0f4235dd603093178c177e86b,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1760796654288737227,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e42c0ad16a76575cdf86955e752de6fc61fbdffec61b610745b88dc300290e,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1760796650670836429,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ce2976bee33494c9148720fc6f41dafc7c06699c436b9f7352992e408fc1ce,PodSandboxId:2f9eb1464924400027510bd40640a85e472321a499aaff7e545d8f90a3a2b454,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760796649028158931,Label
s:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-bphwz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5355fea1-7cc1-4587-853e-61aaaa6f569e,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9830a2003573c4745aeef463de8c6f60ef95ad1ea86413fbba89a04f8d287e29,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&
ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1760796641350506570,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b41579872800aaa54c544cb3ac01bd4bfbdb75ed8bfb2068b63a461effcb494,PodSandboxId:d23e703cbfeb7f985
a5ee31bbb8e9a0beaaca929b2a9d12c66bc036a83f06e54,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1760796639902169014,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efc965f-2bb9-4589-8896-270849ff244b,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b6304f138a157686248517d9a4334e9f7e0a04eb4d75d3e8242c7d66099747,PodSandb
oxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1760796637960180053,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:3781d3641f70c2afdd9e7cf33046996dcefa7ceeb31eaeb6735fe958ea81fbdf,PodSandboxId:2d23bcaba041603a7033e5364863b52ee33056bf513c91b93cbd051dc4ee50fb,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1760796636160087491,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-bzhfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3e3fb2c-05b7-448d-bca6-3438d70868b1,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernete
s.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bb6d569a2a3f2ef99bf632b0e17f74e8f99944756e5338f36177afc9784250e,PodSandboxId:7a44187aa2259b4391883c3f4e9b9dfefc7c60831b7bfc9273715b7a8b6675b5,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1760796636024422683,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-b9tnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a028a732-94f8-46f5-8ade-adc72e44a92d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6267021fe47465dfb0a972ca3ac1853819fcb8ec9c4af79da3515676f56c70d,PodSandboxId:7483a2b2bce44deaa3b7126ad65266f9ccb9eb59517cc399fde2646bdce00e31,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760796634343510547,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lz2l5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: edbb1e3e-09f2-4958-b943-de86e541c2ab,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernete
s.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e786527308546addc508c7f9fde815f3dbf888dbbd28417a6fda88b88fa8ab,PodSandboxId:19bb29e5d6915f98e1c622bd12dfd02a46541ba9d2922196d95c45d1eef03591,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1760796634154278160,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66fa96af-5215-410d-899c-8ee3de6c2691,},Annotations:map[string]string{io.kubernetes.container.h
ash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:405281ec9edfa02e6ef1722dec6adc497496544ed9e116c4827e07faa66e42b3,PodSandboxId:784fb9851d0e370b86d85cb15f009b0ada6ea2b7f21e505158415537390f7d3a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760796631912253285,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nbrm2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e48f1e46-67fb-4c71-bc01-b2f3743345f0,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c389fedf82c73101b96bb9331713ba0cf1fe89e497bb463f4a1a5c8f965331eb,PodSandboxId:f6cf7a6905b38496b0fb0dffcad88c191af9be4e2d42b30916a7239099dd25d8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1760796623404092240,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-kj8pr,io.kubernetes.pod.namespace: local-path-storage,io.kuberne
tes.pod.uid: b9e6b11c-bbb9-4e19-9cb4-ca24b2aa3018,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:751b2df6a5bf4c3261a679f6e961086b9a7e8a0d308b47ba5a823ed41d50ff7c,PodSandboxId:e7adc46dd97a6e6351f075aad05529d7968ddcfdb815b441bff765545717c999,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760796621649083356,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-bz8k2,io.kubernetes.pod.namespace: gadget,io.kubernet
es.pod.uid: 32f0a88f-aea2-4621-a5b1-df5a3fb86a2b,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3faa5d947b9ededdb0f9530cfb2606f9d20f027050a247e368207048d7856361,PodSandboxId:04626452678ece1669cf1b64aa42ec4e38880fec5bfbbb2efb6abcab66a2eba0,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760796611084064989,Labels:map[string]string{io.kubernetes.
container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2be3a2-f8a7-4762-a4a6-aeea42df7e21,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da75007bac0f47603bb3540fd8ae444427639a840b26793c26a279445acc6504,PodSandboxId:bf130a85fe68d5cdda719544aa9afd112627aeb7acb1df2c62daeedf486112a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State
:CONTAINER_RUNNING,CreatedAt:1760796577983458040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6f8bdeb-9db0-44f3-b3cb-8396901acaf5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90350cf8ae05058e381c6f06dfaaa1b66c33001b294c94602cbb4601d22e5bc2,PodSandboxId:b439dd6e51abd6ee7156af98c543df3bcd516cd309de6b0b6fd934ae60d4579a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459
a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760796574525913819,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-c5cbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64430541-160f-413b-b21e-6636047a8859,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b099b5b37807cb6ddae926ed2ce7fd3b3113ee1520cb817da8f25923c16c925,PodSandboxId:ba30da275bea105c47caa89fd0d4a924e96bd43b200434b972d0f1686c5cdb46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f
07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760796569075663973,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9t6mk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2cf3593-0ffc-49aa-ab5d-1ecf71d259cc,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97e1670c81585e6415c369e52af3deebb586e548711c359ac4fe22d13bfbf881,PodSandbo
xId:8fb6c60415fdaa40da442b8d93572f59350e86e5027e05f1e616ddc3e66d1895,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760796567868668763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ckpzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3ac992c-4401-40f5-93dd-7a525ec3b2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f010fdc156cb398c84f19945fc8b9f186ef23cb554bce047cf0bdadc63ef552,PodSandboxId:bfa6fdc1baf4d2d9eaa5d56358672
ee6314ea527df88bc7c5cfbb6d68599a772,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760796553601510681,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4360d09804819a4ab0d1ffed7423947,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873a633e0ebfdc97218e103cd398dde37
7449c146a2b3d8affa3222d72e07fad,PodSandboxId:4b35987ede0428e0950b004d1104001ead21d6b6989238185c2fb74d3cf3bf44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760796553612924961,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1348b107c675acfd26c3d687c91d60c5,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50cc3d2477595030b199dee8a2c8a4cb8f2f508dbbe7bdf89f535de0d3d1d6b6,PodSandboxId:b783fc0f686a0773f409244090fb0347fd53adfbe3110712527fc3d39b81e149,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760796553577778017,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5086595138b36f6eb8ac54e83c6bc182,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.contain
er.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:550e8ca214589028236bc3f3e98efbed492d3f84addbacedfb6929bee8541bab,PodSandboxId:c8fbc229d4f5f4b227bfc321c455f9928cc82e2099fb0746d33c7d9c893295f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760796553532990421,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97082571db3e60e44c3d60e99a384436,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"nam
e\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=49e1bad5-2775-43f7-a0e1-cc89a3b8eefa name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:14:24 addons-891059 crio[822]: time="2025-10-18 14:14:24.634036575Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c8f3f08c-4849-4406-8261-832afa4fd635 name=/runtime.v1.RuntimeService/Version
	Oct 18 14:14:24 addons-891059 crio[822]: time="2025-10-18 14:14:24.634128447Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c8f3f08c-4849-4406-8261-832afa4fd635 name=/runtime.v1.RuntimeService/Version
	Oct 18 14:14:24 addons-891059 crio[822]: time="2025-10-18 14:14:24.635778003Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6b2f559b-483d-41ca-ae55-0bf75bb70bd0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:14:24 addons-891059 crio[822]: time="2025-10-18 14:14:24.636991043Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760796864636963948,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:520517,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6b2f559b-483d-41ca-ae55-0bf75bb70bd0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:14:24 addons-891059 crio[822]: time="2025-10-18 14:14:24.638024187Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=69925927-8584-4c36-95e3-682f232e71de name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:14:24 addons-891059 crio[822]: time="2025-10-18 14:14:24.638240021Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=69925927-8584-4c36-95e3-682f232e71de name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:14:24 addons-891059 crio[822]: time="2025-10-18 14:14:24.639820741Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:80a91b61e6217003e742ebfb64e4ab9e5c4d86d6bd4dcf5ce1a4f27f87288b3f,PodSandboxId:a57ef3e4467d48044f0a63b3de9f53b115ab8d3ebe0f1b8e7fe32582ef6d7734,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:4fae4c1c18e77352b66e795f7d98a24f775d1e9f3ef847454e4857244ebc6c03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b19891abe61fd4f334e0bb4345313cac562b66561765ae851db1ef2f81ba249a,State:CONTAINER_RUNNING,CreatedAt:1760796857923457559,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-6945c6f4d-67bwz,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: fdb7e1d4-852f-4236-9cdf-29089e1285d4,},Annotations:map[string]string{io.kubernetes.container.hash: b7102817,io.kubernetes.container.ports: [{\"name\":\"h
ttp\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4019b2f5a82ebc7fb6dabae9a874d699665a5d8c69de73eb709ca4a501ac015,PodSandboxId:871fa03a650614957b7d3d2014f39478cf8cb5cd45eb550c6abd6222b43732a9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760796662606988160,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 75ccff45-9202-4152-b90e-8a5a6d306c7d,},Annotations:map[string]string{io.k
ubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5e462bcd2b5f465fe95346688533db6801a9c93215937bfbcf4abffe97f6c0,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1760796657878096678,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,
},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e429add87fb7915cacc16256e7047f4f649d645dd6350add56e90ceda89be5cb,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1760796656108432125,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
d7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c154e6ad0036f8e08a29b6d27bd296913987ed0f4235dd603093178c177e86b,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1760796654288737227,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e42c0ad16a76575cdf86955e752de6fc61fbdffec61b610745b88dc300290e,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1760796650670836429,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ce2976bee33494c9148720fc6f41dafc7c06699c436b9f7352992e408fc1ce,PodSandboxId:2f9eb1464924400027510bd40640a85e472321a499aaff7e545d8f90a3a2b454,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760796649028158931,Label
s:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-bphwz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5355fea1-7cc1-4587-853e-61aaaa6f569e,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9830a2003573c4745aeef463de8c6f60ef95ad1ea86413fbba89a04f8d287e29,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&
ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1760796641350506570,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b41579872800aaa54c544cb3ac01bd4bfbdb75ed8bfb2068b63a461effcb494,PodSandboxId:d23e703cbfeb7f985
a5ee31bbb8e9a0beaaca929b2a9d12c66bc036a83f06e54,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1760796639902169014,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efc965f-2bb9-4589-8896-270849ff244b,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b6304f138a157686248517d9a4334e9f7e0a04eb4d75d3e8242c7d66099747,PodSandb
oxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1760796637960180053,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:3781d3641f70c2afdd9e7cf33046996dcefa7ceeb31eaeb6735fe958ea81fbdf,PodSandboxId:2d23bcaba041603a7033e5364863b52ee33056bf513c91b93cbd051dc4ee50fb,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1760796636160087491,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-bzhfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3e3fb2c-05b7-448d-bca6-3438d70868b1,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernete
s.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bb6d569a2a3f2ef99bf632b0e17f74e8f99944756e5338f36177afc9784250e,PodSandboxId:7a44187aa2259b4391883c3f4e9b9dfefc7c60831b7bfc9273715b7a8b6675b5,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1760796636024422683,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-b9tnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a028a732-94f8-46f5-8ade-adc72e44a92d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6267021fe47465dfb0a972ca3ac1853819fcb8ec9c4af79da3515676f56c70d,PodSandboxId:7483a2b2bce44deaa3b7126ad65266f9ccb9eb59517cc399fde2646bdce00e31,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760796634343510547,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lz2l5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: edbb1e3e-09f2-4958-b943-de86e541c2ab,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernete
s.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e786527308546addc508c7f9fde815f3dbf888dbbd28417a6fda88b88fa8ab,PodSandboxId:19bb29e5d6915f98e1c622bd12dfd02a46541ba9d2922196d95c45d1eef03591,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1760796634154278160,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66fa96af-5215-410d-899c-8ee3de6c2691,},Annotations:map[string]string{io.kubernetes.container.h
ash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:405281ec9edfa02e6ef1722dec6adc497496544ed9e116c4827e07faa66e42b3,PodSandboxId:784fb9851d0e370b86d85cb15f009b0ada6ea2b7f21e505158415537390f7d3a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760796631912253285,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nbrm2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e48f1e46-67fb-4c71-bc01-b2f3743345f0,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c389fedf82c73101b96bb9331713ba0cf1fe89e497bb463f4a1a5c8f965331eb,PodSandboxId:f6cf7a6905b38496b0fb0dffcad88c191af9be4e2d42b30916a7239099dd25d8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1760796623404092240,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-kj8pr,io.kubernetes.pod.namespace: local-path-storage,io.kuberne
tes.pod.uid: b9e6b11c-bbb9-4e19-9cb4-ca24b2aa3018,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:751b2df6a5bf4c3261a679f6e961086b9a7e8a0d308b47ba5a823ed41d50ff7c,PodSandboxId:e7adc46dd97a6e6351f075aad05529d7968ddcfdb815b441bff765545717c999,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760796621649083356,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-bz8k2,io.kubernetes.pod.namespace: gadget,io.kubernet
es.pod.uid: 32f0a88f-aea2-4621-a5b1-df5a3fb86a2b,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3faa5d947b9ededdb0f9530cfb2606f9d20f027050a247e368207048d7856361,PodSandboxId:04626452678ece1669cf1b64aa42ec4e38880fec5bfbbb2efb6abcab66a2eba0,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760796611084064989,Labels:map[string]string{io.kubernetes.
container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2be3a2-f8a7-4762-a4a6-aeea42df7e21,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da75007bac0f47603bb3540fd8ae444427639a840b26793c26a279445acc6504,PodSandboxId:bf130a85fe68d5cdda719544aa9afd112627aeb7acb1df2c62daeedf486112a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State
:CONTAINER_RUNNING,CreatedAt:1760796577983458040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6f8bdeb-9db0-44f3-b3cb-8396901acaf5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90350cf8ae05058e381c6f06dfaaa1b66c33001b294c94602cbb4601d22e5bc2,PodSandboxId:b439dd6e51abd6ee7156af98c543df3bcd516cd309de6b0b6fd934ae60d4579a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459
a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760796574525913819,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-c5cbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64430541-160f-413b-b21e-6636047a8859,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b099b5b37807cb6ddae926ed2ce7fd3b3113ee1520cb817da8f25923c16c925,PodSandboxId:ba30da275bea105c47caa89fd0d4a924e96bd43b200434b972d0f1686c5cdb46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f
07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760796569075663973,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9t6mk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2cf3593-0ffc-49aa-ab5d-1ecf71d259cc,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97e1670c81585e6415c369e52af3deebb586e548711c359ac4fe22d13bfbf881,PodSandbo
xId:8fb6c60415fdaa40da442b8d93572f59350e86e5027e05f1e616ddc3e66d1895,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760796567868668763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ckpzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3ac992c-4401-40f5-93dd-7a525ec3b2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f010fdc156cb398c84f19945fc8b9f186ef23cb554bce047cf0bdadc63ef552,PodSandboxId:bfa6fdc1baf4d2d9eaa5d56358672
ee6314ea527df88bc7c5cfbb6d68599a772,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760796553601510681,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4360d09804819a4ab0d1ffed7423947,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873a633e0ebfdc97218e103cd398dde37
7449c146a2b3d8affa3222d72e07fad,PodSandboxId:4b35987ede0428e0950b004d1104001ead21d6b6989238185c2fb74d3cf3bf44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760796553612924961,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1348b107c675acfd26c3d687c91d60c5,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50cc3d2477595030b199dee8a2c8a4cb8f2f508dbbe7bdf89f535de0d3d1d6b6,PodSandboxId:b783fc0f686a0773f409244090fb0347fd53adfbe3110712527fc3d39b81e149,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760796553577778017,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5086595138b36f6eb8ac54e83c6bc182,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.contain
er.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:550e8ca214589028236bc3f3e98efbed492d3f84addbacedfb6929bee8541bab,PodSandboxId:c8fbc229d4f5f4b227bfc321c455f9928cc82e2099fb0746d33c7d9c893295f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760796553532990421,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97082571db3e60e44c3d60e99a384436,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"nam
e\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=69925927-8584-4c36-95e3-682f232e71de name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:14:24 addons-891059 crio[822]: time="2025-10-18 14:14:24.689371344Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=904d7e2a-cca0-4eb4-bcfc-3cb26ffdb970 name=/runtime.v1.RuntimeService/Version
	Oct 18 14:14:24 addons-891059 crio[822]: time="2025-10-18 14:14:24.689448458Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=904d7e2a-cca0-4eb4-bcfc-3cb26ffdb970 name=/runtime.v1.RuntimeService/Version
	Oct 18 14:14:24 addons-891059 crio[822]: time="2025-10-18 14:14:24.691024040Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=20cbb738-71e3-45a2-a942-3c0aa98c11d9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:14:24 addons-891059 crio[822]: time="2025-10-18 14:14:24.692412050Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760796864692379211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:520517,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=20cbb738-71e3-45a2-a942-3c0aa98c11d9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:14:24 addons-891059 crio[822]: time="2025-10-18 14:14:24.693303316Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e824c242-bac1-42c9-9806-69580a7f5b7e name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:14:24 addons-891059 crio[822]: time="2025-10-18 14:14:24.693643481Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e824c242-bac1-42c9-9806-69580a7f5b7e name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:14:24 addons-891059 crio[822]: time="2025-10-18 14:14:24.694783481Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:80a91b61e6217003e742ebfb64e4ab9e5c4d86d6bd4dcf5ce1a4f27f87288b3f,PodSandboxId:a57ef3e4467d48044f0a63b3de9f53b115ab8d3ebe0f1b8e7fe32582ef6d7734,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:4fae4c1c18e77352b66e795f7d98a24f775d1e9f3ef847454e4857244ebc6c03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b19891abe61fd4f334e0bb4345313cac562b66561765ae851db1ef2f81ba249a,State:CONTAINER_RUNNING,CreatedAt:1760796857923457559,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-6945c6f4d-67bwz,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: fdb7e1d4-852f-4236-9cdf-29089e1285d4,},Annotations:map[string]string{io.kubernetes.container.hash: b7102817,io.kubernetes.container.ports: [{\"name\":\"h
ttp\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4019b2f5a82ebc7fb6dabae9a874d699665a5d8c69de73eb709ca4a501ac015,PodSandboxId:871fa03a650614957b7d3d2014f39478cf8cb5cd45eb550c6abd6222b43732a9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760796662606988160,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 75ccff45-9202-4152-b90e-8a5a6d306c7d,},Annotations:map[string]string{io.k
ubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5e462bcd2b5f465fe95346688533db6801a9c93215937bfbcf4abffe97f6c0,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1760796657878096678,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,
},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e429add87fb7915cacc16256e7047f4f649d645dd6350add56e90ceda89be5cb,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1760796656108432125,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
d7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c154e6ad0036f8e08a29b6d27bd296913987ed0f4235dd603093178c177e86b,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1760796654288737227,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e42c0ad16a76575cdf86955e752de6fc61fbdffec61b610745b88dc300290e,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1760796650670836429,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ce2976bee33494c9148720fc6f41dafc7c06699c436b9f7352992e408fc1ce,PodSandboxId:2f9eb1464924400027510bd40640a85e472321a499aaff7e545d8f90a3a2b454,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760796649028158931,Label
s:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-bphwz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5355fea1-7cc1-4587-853e-61aaaa6f569e,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9830a2003573c4745aeef463de8c6f60ef95ad1ea86413fbba89a04f8d287e29,PodSandboxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&
ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1760796641350506570,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b41579872800aaa54c544cb3ac01bd4bfbdb75ed8bfb2068b63a461effcb494,PodSandboxId:d23e703cbfeb7f985
a5ee31bbb8e9a0beaaca929b2a9d12c66bc036a83f06e54,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1760796639902169014,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efc965f-2bb9-4589-8896-270849ff244b,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b6304f138a157686248517d9a4334e9f7e0a04eb4d75d3e8242c7d66099747,PodSandb
oxId:90e767d4c7dbaae662125240df9100ed2fadaf431f677002ca60219ca58ef7d4,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1760796637960180053,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-65z6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad7f1cc5-6176-4f71-9c29-3fd9d9546f7b,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:3781d3641f70c2afdd9e7cf33046996dcefa7ceeb31eaeb6735fe958ea81fbdf,PodSandboxId:2d23bcaba041603a7033e5364863b52ee33056bf513c91b93cbd051dc4ee50fb,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1760796636160087491,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-bzhfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3e3fb2c-05b7-448d-bca6-3438d70868b1,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernete
s.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bb6d569a2a3f2ef99bf632b0e17f74e8f99944756e5338f36177afc9784250e,PodSandboxId:7a44187aa2259b4391883c3f4e9b9dfefc7c60831b7bfc9273715b7a8b6675b5,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1760796636024422683,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-b9tnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a028a732-94f8-46f5-8ade-adc72e44a92d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6267021fe47465dfb0a972ca3ac1853819fcb8ec9c4af79da3515676f56c70d,PodSandboxId:7483a2b2bce44deaa3b7126ad65266f9ccb9eb59517cc399fde2646bdce00e31,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760796634343510547,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lz2l5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: edbb1e3e-09f2-4958-b943-de86e541c2ab,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernete
s.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e786527308546addc508c7f9fde815f3dbf888dbbd28417a6fda88b88fa8ab,PodSandboxId:19bb29e5d6915f98e1c622bd12dfd02a46541ba9d2922196d95c45d1eef03591,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1760796634154278160,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66fa96af-5215-410d-899c-8ee3de6c2691,},Annotations:map[string]string{io.kubernetes.container.h
ash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:405281ec9edfa02e6ef1722dec6adc497496544ed9e116c4827e07faa66e42b3,PodSandboxId:784fb9851d0e370b86d85cb15f009b0ada6ea2b7f21e505158415537390f7d3a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760796631912253285,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nbrm2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e48f1e46-67fb-4c71-bc01-b2f3743345f0,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c389fedf82c73101b96bb9331713ba0cf1fe89e497bb463f4a1a5c8f965331eb,PodSandboxId:f6cf7a6905b38496b0fb0dffcad88c191af9be4e2d42b30916a7239099dd25d8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1760796623404092240,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-kj8pr,io.kubernetes.pod.namespace: local-path-storage,io.kuberne
tes.pod.uid: b9e6b11c-bbb9-4e19-9cb4-ca24b2aa3018,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:751b2df6a5bf4c3261a679f6e961086b9a7e8a0d308b47ba5a823ed41d50ff7c,PodSandboxId:e7adc46dd97a6e6351f075aad05529d7968ddcfdb815b441bff765545717c999,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760796621649083356,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-bz8k2,io.kubernetes.pod.namespace: gadget,io.kubernet
es.pod.uid: 32f0a88f-aea2-4621-a5b1-df5a3fb86a2b,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3faa5d947b9ededdb0f9530cfb2606f9d20f027050a247e368207048d7856361,PodSandboxId:04626452678ece1669cf1b64aa42ec4e38880fec5bfbbb2efb6abcab66a2eba0,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760796611084064989,Labels:map[string]string{io.kubernetes.
container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2be3a2-f8a7-4762-a4a6-aeea42df7e21,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da75007bac0f47603bb3540fd8ae444427639a840b26793c26a279445acc6504,PodSandboxId:bf130a85fe68d5cdda719544aa9afd112627aeb7acb1df2c62daeedf486112a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State
:CONTAINER_RUNNING,CreatedAt:1760796577983458040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6f8bdeb-9db0-44f3-b3cb-8396901acaf5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90350cf8ae05058e381c6f06dfaaa1b66c33001b294c94602cbb4601d22e5bc2,PodSandboxId:b439dd6e51abd6ee7156af98c543df3bcd516cd309de6b0b6fd934ae60d4579a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459
a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760796574525913819,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-c5cbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64430541-160f-413b-b21e-6636047a8859,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b099b5b37807cb6ddae926ed2ce7fd3b3113ee1520cb817da8f25923c16c925,PodSandboxId:ba30da275bea105c47caa89fd0d4a924e96bd43b200434b972d0f1686c5cdb46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f
07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760796569075663973,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9t6mk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2cf3593-0ffc-49aa-ab5d-1ecf71d259cc,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97e1670c81585e6415c369e52af3deebb586e548711c359ac4fe22d13bfbf881,PodSandbo
xId:8fb6c60415fdaa40da442b8d93572f59350e86e5027e05f1e616ddc3e66d1895,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760796567868668763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ckpzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3ac992c-4401-40f5-93dd-7a525ec3b2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f010fdc156cb398c84f19945fc8b9f186ef23cb554bce047cf0bdadc63ef552,PodSandboxId:bfa6fdc1baf4d2d9eaa5d56358672
ee6314ea527df88bc7c5cfbb6d68599a772,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760796553601510681,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4360d09804819a4ab0d1ffed7423947,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873a633e0ebfdc97218e103cd398dde37
7449c146a2b3d8affa3222d72e07fad,PodSandboxId:4b35987ede0428e0950b004d1104001ead21d6b6989238185c2fb74d3cf3bf44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760796553612924961,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1348b107c675acfd26c3d687c91d60c5,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50cc3d2477595030b199dee8a2c8a4cb8f2f508dbbe7bdf89f535de0d3d1d6b6,PodSandboxId:b783fc0f686a0773f409244090fb0347fd53adfbe3110712527fc3d39b81e149,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760796553577778017,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5086595138b36f6eb8ac54e83c6bc182,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.contain
er.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:550e8ca214589028236bc3f3e98efbed492d3f84addbacedfb6929bee8541bab,PodSandboxId:c8fbc229d4f5f4b227bfc321c455f9928cc82e2099fb0746d33c7d9c893295f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760796553532990421,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-891059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97082571db3e60e44c3d60e99a384436,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"nam
e\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e824c242-bac1-42c9-9806-69580a7f5b7e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	80a91b61e6217       ghcr.io/headlamp-k8s/headlamp@sha256:4fae4c1c18e77352b66e795f7d98a24f775d1e9f3ef847454e4857244ebc6c03                                        6 seconds ago       Running             headlamp                                 0                   a57ef3e4467d4       headlamp-6945c6f4d-67bwz
	a4019b2f5a82e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          3 minutes ago       Running             busybox                                  0                   871fa03a65061       busybox
	2d5e462bcd2b5       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          3 minutes ago       Running             csi-snapshotter                          0                   90e767d4c7dba       csi-hostpathplugin-65z6z
	e429add87fb79       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago       Running             csi-provisioner                          0                   90e767d4c7dba       csi-hostpathplugin-65z6z
	0c154e6ad0036       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago       Running             liveness-probe                           0                   90e767d4c7dba       csi-hostpathplugin-65z6z
	34e42c0ad16a7       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago       Running             hostpath                                 0                   90e767d4c7dba       csi-hostpathplugin-65z6z
	90ce2976bee33       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd                             3 minutes ago       Running             controller                               0                   2f9eb14649244       ingress-nginx-controller-675c5ddd98-bphwz
	9830a2003573c       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago       Running             node-driver-registrar                    0                   90e767d4c7dba       csi-hostpathplugin-65z6z
	8b41579872800       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago       Running             csi-resizer                              0                   d23e703cbfeb7       csi-hostpath-resizer-0
	e6b6304f138a1       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago       Running             csi-external-health-monitor-controller   0                   90e767d4c7dba       csi-hostpathplugin-65z6z
	3781d3641f70c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago       Running             volume-snapshot-controller               0                   2d23bcaba0416       snapshot-controller-7d9fbc56b8-bzhfk
	9bb6d569a2a3f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago       Running             volume-snapshot-controller               0                   7a44187aa2259       snapshot-controller-7d9fbc56b8-b9tnq
	a6267021fe474       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39                   3 minutes ago       Exited              patch                                    0                   7483a2b2bce44       ingress-nginx-admission-patch-lz2l5
	c8e7865273085       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago       Running             csi-attacher                             0                   19bb29e5d6915       csi-hostpath-attacher-0
	405281ec9edfa       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39                   3 minutes ago       Exited              create                                   0                   784fb9851d0e3       ingress-nginx-admission-create-nbrm2
	c389fedf82c73       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             4 minutes ago       Running             local-path-provisioner                   0                   f6cf7a6905b38       local-path-provisioner-648f6765c9-kj8pr
	751b2df6a5bf4       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            4 minutes ago       Running             gadget                                   0                   e7adc46dd97a6       gadget-bz8k2
	3faa5d947b9ed       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               4 minutes ago       Running             minikube-ingress-dns                     0                   04626452678ec       kube-ingress-dns-minikube
	da75007bac0f4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             4 minutes ago       Running             storage-provisioner                      0                   bf130a85fe68d       storage-provisioner
	90350cf8ae050       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     4 minutes ago       Running             amd-gpu-device-plugin                    0                   b439dd6e51abd       amd-gpu-device-plugin-c5cbb
	5b099b5b37807       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             4 minutes ago       Running             coredns                                  0                   ba30da275bea1       coredns-66bc5c9577-9t6mk
	97e1670c81585       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago       Running             kube-proxy                               0                   8fb6c60415fda       kube-proxy-ckpzl
	873a633e0ebfd       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             5 minutes ago       Running             kube-controller-manager                  0                   4b35987ede042       kube-controller-manager-addons-891059
	4f010fdc156cb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             5 minutes ago       Running             etcd                                     0                   bfa6fdc1baf4d       etcd-addons-891059
	50cc3d2477595       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             5 minutes ago       Running             kube-scheduler                           0                   b783fc0f686a0       kube-scheduler-addons-891059
	550e8ca214589       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             5 minutes ago       Running             kube-apiserver                           0                   c8fbc229d4f5f       kube-apiserver-addons-891059
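	The table above is minikube's rendering of that same CRI-O container list. To tie a row back to its Kubernetes object, the usual cross-check is a wide pod listing (a sketch):
	
	# match POD column entries (e.g. csi-hostpathplugin-65z6z) to namespace and node
	kubectl --context addons-891059 get pods -A -o wide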
	
	
	==> coredns [5b099b5b37807cb6ddae926ed2ce7fd3b3113ee1520cb817da8f25923c16c925] <==
	[INFO] 10.244.0.8:38553 - 35504 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000072442s
	[INFO] 10.244.0.8:41254 - 10457 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000126469s
	[INFO] 10.244.0.8:41254 - 10148 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000351753s
	[INFO] 10.244.0.8:58812 - 14712 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000165201s
	[INFO] 10.244.0.8:58812 - 14408 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000227737s
	[INFO] 10.244.0.8:46072 - 17563 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000089989s
	[INFO] 10.244.0.8:46072 - 17331 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000357865s
	[INFO] 10.244.0.8:44214 - 24523 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000103993s
	[INFO] 10.244.0.8:44214 - 24308 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000319225s
	[INFO] 10.244.0.23:53101 - 38230 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000789741s
	[INFO] 10.244.0.23:39743 - 4637 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00014608s
	[INFO] 10.244.0.23:34680 - 45484 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000257617s
	[INFO] 10.244.0.23:57667 - 2834 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000156321s
	[INFO] 10.244.0.23:49060 - 9734 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000228026s
	[INFO] 10.244.0.23:49380 - 40146 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00011544s
	[INFO] 10.244.0.23:59610 - 60837 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001192659s
	[INFO] 10.244.0.23:43936 - 55741 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001950004s
	[INFO] 10.244.0.28:45423 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NXDOMAIN qr,aa,rd 149 0.000594412s
	[INFO] 10.244.0.28:35326 - 3 "AAAA IN registry.kube-system.svc.cluster.local.default.svc.cluster.local. udp 82 false 512" NXDOMAIN qr,aa,rd 175 0.000279094s
	[INFO] 10.244.0.28:34121 - 4 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000115216s
	[INFO] 10.244.0.28:43026 - 5 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000225891s
	[INFO] 10.244.0.28:58520 - 6 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NXDOMAIN qr,aa,rd 149 0.000121233s
	[INFO] 10.244.0.28:39709 - 7 "A IN registry.kube-system.svc.cluster.local.default.svc.cluster.local. udp 82 false 512" NXDOMAIN qr,aa,rd 175 0.000126579s
	[INFO] 10.244.0.28:46571 - 8 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000295561s
	[INFO] 10.244.0.28:34480 - 9 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000104287s
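	The last block of queries (from 10.244.0.28) gets NXDOMAIN for registry.kube-system.svc.cluster.local and every search-path expansion of it. A quick way to replay that lookup, as a sketch with a hypothetical pod name:
	
	# one-off busybox pod; nslookup is built into this image
	kubectl --context addons-891059 run --rm dns-probe --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  nslookup registry.kube-system.svc.cluster.local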
	
	
	==> describe nodes <==
	Name:               addons-891059
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-891059
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=addons-891059
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T14_09_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-891059
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-891059"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 14:09:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-891059
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 14:14:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 14:13:25 +0000   Sat, 18 Oct 2025 14:09:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 14:13:25 +0000   Sat, 18 Oct 2025 14:09:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 14:13:25 +0000   Sat, 18 Oct 2025 14:09:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 14:13:25 +0000   Sat, 18 Oct 2025 14:09:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.100
	  Hostname:    addons-891059
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 372d92314fa4448095fc5052e6676096
	  System UUID:                372d9231-4fa4-4480-95fc-5052e6676096
	  Boot ID:                    7e38709f-8590-4225-8b4d-3bbac20f6c51
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (22 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	  default                     test-local-path                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  gadget                      gadget-bz8k2                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  headlamp                    headlamp-6945c6f4d-67bwz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-bphwz    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m52s
	  kube-system                 amd-gpu-device-plugin-c5cbb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 coredns-66bc5c9577-9t6mk                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m1s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 csi-hostpathplugin-65z6z                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 etcd-addons-891059                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m6s
	  kube-system                 kube-apiserver-addons-891059                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-controller-manager-addons-891059        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-proxy-ckpzl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-scheduler-addons-891059                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 snapshot-controller-7d9fbc56b8-b9tnq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 snapshot-controller-7d9fbc56b8-bzhfk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  local-path-storage          local-path-provisioner-648f6765c9-kj8pr      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m55s  kube-proxy       
	  Normal  Starting                 5m6s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m6s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m6s   kubelet          Node addons-891059 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m6s   kubelet          Node addons-891059 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m6s   kubelet          Node addons-891059 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m5s   kubelet          Node addons-891059 status is now: NodeReady
	  Normal  RegisteredNode           5m2s   node-controller  Node addons-891059 event: Registered Node addons-891059 in Controller
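	This node report is plain kubectl output and can be regenerated at any time with:
	
	kubectl --context addons-891059 describe node addons-891059
	
	The Allocated-resources percentages are taken against Allocatable: 850m of 2000m CPU is 42%, and 260Mi of roughly 3914Mi (4008596Ki) memory is 6%, matching the table above.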
	
	
	==> dmesg <==
	[  +0.024674] kauditd_printk_skb: 18 callbacks suppressed
	[  +1.704116] kauditd_printk_skb: 297 callbacks suppressed
	[  +0.252518] kauditd_printk_skb: 227 callbacks suppressed
	[  +0.620971] kauditd_printk_skb: 414 callbacks suppressed
	[ +15.304937] kauditd_printk_skb: 49 callbacks suppressed
	[Oct18 14:10] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.485780] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.577564] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.762881] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.526985] kauditd_printk_skb: 26 callbacks suppressed
	[  +2.667244] kauditd_printk_skb: 76 callbacks suppressed
	[  +3.038951] kauditd_printk_skb: 160 callbacks suppressed
	[  +5.632898] kauditd_printk_skb: 88 callbacks suppressed
	[  +5.124721] kauditd_printk_skb: 47 callbacks suppressed
	[Oct18 14:11] kauditd_printk_skb: 41 callbacks suppressed
	[ +11.104883] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.000298] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.000091] kauditd_printk_skb: 61 callbacks suppressed
	[  +0.000058] kauditd_printk_skb: 94 callbacks suppressed
	[  +5.819366] kauditd_printk_skb: 58 callbacks suppressed
	[Oct18 14:12] kauditd_printk_skb: 38 callbacks suppressed
	[  +7.221421] kauditd_printk_skb: 45 callbacks suppressed
	[ +11.837047] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.423844] kauditd_printk_skb: 58 callbacks suppressed
	[Oct18 14:13] kauditd_printk_skb: 25 callbacks suppressed
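	The repeated "kauditd_printk_skb: N callbacks suppressed" lines mean the kernel rate-limited bursts of audit messages; the surviving ring buffer can be pulled from the VM directly (a sketch, assuming a dmesg binary is present in the guest):
	
	minikube ssh -p addons-891059 -- sudo dmesg | tail -n 40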
	
	
	==> etcd [4f010fdc156cb398c84f19945fc8b9f186ef23cb554bce047cf0bdadc63ef552] <==
	{"level":"info","ts":"2025-10-18T14:10:27.789790Z","caller":"traceutil/trace.go:172","msg":"trace[1019503945] transaction","detail":"{read_only:false; response_revision:980; number_of_response:1; }","duration":"291.472583ms","start":"2025-10-18T14:10:27.498307Z","end":"2025-10-18T14:10:27.789779Z","steps":["trace[1019503945] 'process raft request'  (duration: 291.315936ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T14:10:27.789826Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.361325ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T14:10:27.789858Z","caller":"traceutil/trace.go:172","msg":"trace[1466024528] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:979; }","duration":"117.444796ms","start":"2025-10-18T14:10:27.672405Z","end":"2025-10-18T14:10:27.789850Z","steps":["trace[1466024528] 'agreement among raft nodes before linearized reading'  (duration: 117.307687ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T14:10:27.790385Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.236345ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T14:10:27.790510Z","caller":"traceutil/trace.go:172","msg":"trace[732980754] range","detail":"{range_begin:/registry/deployments; range_end:; response_count:0; response_revision:980; }","duration":"108.373321ms","start":"2025-10-18T14:10:27.682130Z","end":"2025-10-18T14:10:27.790503Z","steps":["trace[732980754] 'agreement among raft nodes before linearized reading'  (duration: 108.1351ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T14:10:31.360128Z","caller":"traceutil/trace.go:172","msg":"trace[1845619058] transaction","detail":"{read_only:false; response_revision:997; number_of_response:1; }","duration":"140.456007ms","start":"2025-10-18T14:10:31.219657Z","end":"2025-10-18T14:10:31.360113Z","steps":["trace[1845619058] 'process raft request'  (duration: 140.331758ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T14:10:46.208681Z","caller":"traceutil/trace.go:172","msg":"trace[1766959808] transaction","detail":"{read_only:false; response_revision:1104; number_of_response:1; }","duration":"186.674963ms","start":"2025-10-18T14:10:46.021984Z","end":"2025-10-18T14:10:46.208659Z","steps":["trace[1766959808] 'process raft request'  (duration: 186.50291ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T14:11:02.952579Z","caller":"traceutil/trace.go:172","msg":"trace[1731516554] linearizableReadLoop","detail":"{readStateIndex:1235; appliedIndex:1235; }","duration":"113.28639ms","start":"2025-10-18T14:11:02.839276Z","end":"2025-10-18T14:11:02.952562Z","steps":["trace[1731516554] 'read index received'  (duration: 113.240159ms)","trace[1731516554] 'applied index is now lower than readState.Index'  (duration: 45.276µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T14:11:02.953674Z","caller":"traceutil/trace.go:172","msg":"trace[374499777] transaction","detail":"{read_only:false; response_revision:1198; number_of_response:1; }","duration":"131.03911ms","start":"2025-10-18T14:11:02.822625Z","end":"2025-10-18T14:11:02.953664Z","steps":["trace[374499777] 'process raft request'  (duration: 130.864849ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T14:11:02.953956Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.682576ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T14:11:02.958891Z","caller":"traceutil/trace.go:172","msg":"trace[2098939205] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1198; }","duration":"119.626167ms","start":"2025-10-18T14:11:02.839251Z","end":"2025-10-18T14:11:02.958878Z","steps":["trace[2098939205] 'agreement among raft nodes before linearized reading'  (duration: 114.665108ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T14:14:17.804829Z","caller":"traceutil/trace.go:172","msg":"trace[38135400] linearizableReadLoop","detail":"{readStateIndex:1845; appliedIndex:1845; }","duration":"254.786987ms","start":"2025-10-18T14:14:17.550008Z","end":"2025-10-18T14:14:17.804795Z","steps":["trace[38135400] 'read index received'  (duration: 254.774829ms)","trace[38135400] 'applied index is now lower than readState.Index'  (duration: 11.099µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T14:14:17.805068Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"255.018833ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T14:14:17.805091Z","caller":"traceutil/trace.go:172","msg":"trace[1453244013] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1761; }","duration":"255.081798ms","start":"2025-10-18T14:14:17.550004Z","end":"2025-10-18T14:14:17.805086Z","steps":["trace[1453244013] 'agreement among raft nodes before linearized reading'  (duration: 254.990525ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T14:14:17.805508Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.4057ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T14:14:17.805595Z","caller":"traceutil/trace.go:172","msg":"trace[926038607] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1762; }","duration":"133.500196ms","start":"2025-10-18T14:14:17.672087Z","end":"2025-10-18T14:14:17.805587Z","steps":["trace[926038607] 'agreement among raft nodes before linearized reading'  (duration: 133.363964ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T14:14:17.805922Z","caller":"traceutil/trace.go:172","msg":"trace[451226295] transaction","detail":"{read_only:false; response_revision:1762; number_of_response:1; }","duration":"260.563702ms","start":"2025-10-18T14:14:17.545349Z","end":"2025-10-18T14:14:17.805913Z","steps":["trace[451226295] 'process raft request'  (duration: 259.940194ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T14:14:23.347090Z","caller":"traceutil/trace.go:172","msg":"trace[355090838] linearizableReadLoop","detail":"{readStateIndex:1864; appliedIndex:1864; }","duration":"301.568388ms","start":"2025-10-18T14:14:23.045504Z","end":"2025-10-18T14:14:23.347073Z","steps":["trace[355090838] 'read index received'  (duration: 301.562884ms)","trace[355090838] 'applied index is now lower than readState.Index'  (duration: 4.302µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T14:14:23.347216Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"301.743884ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T14:14:23.347238Z","caller":"traceutil/trace.go:172","msg":"trace[954386242] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1779; }","duration":"301.780363ms","start":"2025-10-18T14:14:23.045451Z","end":"2025-10-18T14:14:23.347231Z","steps":["trace[954386242] 'agreement among raft nodes before linearized reading'  (duration: 301.721286ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T14:14:23.347296Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T14:14:23.045431Z","time spent":"301.853987ms","remote":"127.0.0.1:53840","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-10-18T14:14:23.347302Z","caller":"traceutil/trace.go:172","msg":"trace[648344144] transaction","detail":"{read_only:false; response_revision:1780; number_of_response:1; }","duration":"307.588862ms","start":"2025-10-18T14:14:23.039701Z","end":"2025-10-18T14:14:23.347290Z","steps":["trace[648344144] 'process raft request'  (duration: 307.402517ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T14:14:23.347441Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T14:14:23.039679Z","time spent":"307.656367ms","remote":"127.0.0.1:53970","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":677,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-36nbpcgspzmnrg7y5avwjcoroi\" mod_revision:1752 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-36nbpcgspzmnrg7y5avwjcoroi\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-36nbpcgspzmnrg7y5avwjcoroi\" > >"}
	{"level":"warn","ts":"2025-10-18T14:14:23.347489Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"166.844351ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T14:14:23.347507Z","caller":"traceutil/trace.go:172","msg":"trace[2122422757] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1780; }","duration":"166.862778ms","start":"2025-10-18T14:14:23.180639Z","end":"2025-10-18T14:14:23.347502Z","steps":["trace[2122422757] 'agreement among raft nodes before linearized reading'  (duration: 166.829225ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:14:25 up 5 min,  0 users,  load average: 1.99, 1.73, 0.90
	Linux addons-891059 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [550e8ca214589028236bc3f3e98efbed492d3f84addbacedfb6929bee8541bab] <==
	W1018 14:09:53.453446       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1018 14:09:53.493977       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 14:09:53.500603       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 14:10:34.174347       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 14:10:34.174816       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1018 14:10:34.174931       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1018 14:10:34.177190       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 14:10:34.177355       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1018 14:10:34.177368       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1018 14:10:41.344292       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.140.151:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.140.151:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.140.151:443: connect: connection refused" logger="UnhandledError"
	W1018 14:10:41.345235       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 14:10:41.349441       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1018 14:10:41.403792       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1018 14:11:09.006479       1 conn.go:339] Error on socket receive: read tcp 192.168.39.100:8443->192.168.39.1:51796: use of closed network connection
	E1018 14:11:09.215206       1 conn.go:339] Error on socket receive: read tcp 192.168.39.100:8443->192.168.39.1:51814: use of closed network connection
	I1018 14:11:36.964050       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1018 14:11:37.174177       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.128.177"}
	I1018 14:11:42.373806       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1018 14:12:52.429043       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.125.191"}
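
The v1beta1.metrics.k8s.io errors above are the aggregation layer failing to reach metrics-server while it starts up (and again while the addon is torn down). A hedged spot-check of the APIService and its backing pods (the k8s-app=metrics-server label is the upstream default, assumed here):
	kubectl --context addons-891059 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-891059 -n kube-system get pods -l k8s-app=metrics-server -o wide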
	
	
	==> kube-controller-manager [873a633e0ebfdc97218e103cd398dde377449c146a2b3d8affa3222d72e07fad] <==
	I1018 14:09:23.461755       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 14:09:23.462051       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 14:09:23.462733       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 14:09:23.462816       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 14:09:23.464420       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 14:09:23.465969       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 14:09:23.466053       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 14:09:23.467317       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 14:09:23.471785       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 14:09:23.473104       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 14:09:23.507962       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 14:09:23.507980       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 14:09:23.507988       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1018 14:09:32.271939       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1018 14:09:53.430333       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1018 14:09:53.430686       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1018 14:09:53.430794       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1018 14:09:53.479595       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1018 14:09:53.486163       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1018 14:09:53.531732       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 14:09:53.587475       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1018 14:10:23.541245       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1018 14:10:23.598329       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1018 14:11:22.617268       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I1018 14:12:49.739835       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
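
The "stale GroupVersion discovery" errors are the resource-quota and garbage-collector controllers tripping over the same unavailable metrics.k8s.io group; they clear once the APIService is healthy or removed. A hedged probe of the group itself:
	kubectl --context addons-891059 get --raw /apis/metrics.k8s.io/v1beta1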
	
	
	==> kube-proxy [97e1670c81585e6415c369e52af3deebb586e548711c359ac4fe22d13bfbf881] <==
	I1018 14:09:29.078784       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 14:09:29.179875       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 14:09:29.180064       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.100"]
	E1018 14:09:29.180168       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 14:09:29.435752       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1018 14:09:29.435855       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1018 14:09:29.435886       1 server_linux.go:132] "Using iptables Proxier"
	I1018 14:09:29.458405       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 14:09:29.459486       1 server.go:527] "Version info" version="v1.34.1"
	I1018 14:09:29.459499       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 14:09:29.471972       1 config.go:200] "Starting service config controller"
	I1018 14:09:29.472688       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 14:09:29.472718       1 config.go:106] "Starting endpoint slice config controller"
	I1018 14:09:29.472724       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 14:09:29.472739       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 14:09:29.472745       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 14:09:29.474046       1 config.go:309] "Starting node config controller"
	I1018 14:09:29.474055       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 14:09:29.474060       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 14:09:29.573160       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 14:09:29.573457       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 14:09:29.573493       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
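
The nodePortAddresses warning can be silenced exactly as the message suggests. One hedged sketch is editing the kubeadm-managed kube-proxy ConfigMap (the config.conf key and KubeProxyConfiguration field names are assumed from the standard kubeadm layout):
	kubectl --context addons-891059 -n kube-system edit configmap kube-proxy
	# in config.conf, set: nodePortAddresses: ["primary"]
	kubectl --context addons-891059 -n kube-system rollout restart daemonset kube-proxy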
	
	
	==> kube-scheduler [50cc3d2477595030b199dee8a2c8a4cb8f2f508dbbe7bdf89f535de0d3d1d6b6] <==
	E1018 14:09:16.517030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 14:09:16.517067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 14:09:16.517111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 14:09:16.517151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 14:09:16.517190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 14:09:16.517227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 14:09:16.517305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 14:09:16.517334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 14:09:16.517377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 14:09:16.517437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 14:09:16.524951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 14:09:17.315107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 14:09:17.350735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 14:09:17.351152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 14:09:17.351207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 14:09:17.375382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 14:09:17.392110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 14:09:17.451119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 14:09:17.490015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 14:09:17.582674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 14:09:17.653362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 14:09:17.692474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 14:09:17.761718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 14:09:17.762010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1018 14:09:18.995741       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
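
The burst of "Failed to watch ... is forbidden" errors is a routine startup race: the scheduler's informers begin listing before its RBAC bindings are served, and the closing "Caches are synced" line shows it recovered on its own. A hedged spot-check of the default binding:
	kubectl --context addons-891059 get clusterrolebinding system:kube-scheduler -o wide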
	
	
	==> kubelet <==
	Oct 18 14:13:27 addons-891059 kubelet[1503]: E1018 14:13:27.959090    1503 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 18 14:13:27 addons-891059 kubelet[1503]: E1018 14:13:27.959146    1503 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 18 14:13:27 addons-891059 kubelet[1503]: E1018 14:13:27.959401    1503 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(3922f28b-1c3b-4a38-b461-c5f57823b438): ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 18 14:13:27 addons-891059 kubelet[1503]: E1018 14:13:27.959442    1503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="3922f28b-1c3b-4a38-b461-c5f57823b438"
	Oct 18 14:13:28 addons-891059 kubelet[1503]: E1018 14:13:28.666482    1503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="3922f28b-1c3b-4a38-b461-c5f57823b438"
	Oct 18 14:13:29 addons-891059 kubelet[1503]: E1018 14:13:29.969307    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760796809968752573  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:506670}  inodes_used:{value:181}}"
	Oct 18 14:13:29 addons-891059 kubelet[1503]: E1018 14:13:29.969335    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760796809968752573  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:506670}  inodes_used:{value:181}}"
	Oct 18 14:13:38 addons-891059 kubelet[1503]: I1018 14:13:38.472686    1503 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 14:13:39 addons-891059 kubelet[1503]: E1018 14:13:39.973207    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760796819972827481  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:506670}  inodes_used:{value:181}}"
	Oct 18 14:13:39 addons-891059 kubelet[1503]: E1018 14:13:39.973259    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760796819972827481  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:506670}  inodes_used:{value:181}}"
	Oct 18 14:13:45 addons-891059 kubelet[1503]: I1018 14:13:45.473483    1503 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-c5cbb" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 14:13:49 addons-891059 kubelet[1503]: E1018 14:13:49.976699    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760796829976260478  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:506670}  inodes_used:{value:181}}"
	Oct 18 14:13:49 addons-891059 kubelet[1503]: E1018 14:13:49.976744    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760796829976260478  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:506670}  inodes_used:{value:181}}"
	Oct 18 14:13:59 addons-891059 kubelet[1503]: E1018 14:13:59.980849    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760796839980342197  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:506670}  inodes_used:{value:181}}"
	Oct 18 14:13:59 addons-891059 kubelet[1503]: E1018 14:13:59.981319    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760796839980342197  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:506670}  inodes_used:{value:181}}"
	Oct 18 14:14:09 addons-891059 kubelet[1503]: E1018 14:14:09.984420    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760796849983848935  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:506670}  inodes_used:{value:181}}"
	Oct 18 14:14:09 addons-891059 kubelet[1503]: E1018 14:14:09.984613    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760796849983848935  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:506670}  inodes_used:{value:181}}"
	Oct 18 14:14:12 addons-891059 kubelet[1503]: E1018 14:14:12.969467    1503 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from image index: reading manifest sha256:4a35a7836fe08f340a42e25c4ac5eef4439585bbbb817b7bd28b2cd87c742642 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Oct 18 14:14:12 addons-891059 kubelet[1503]: E1018 14:14:12.969517    1503 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from image index: reading manifest sha256:4a35a7836fe08f340a42e25c4ac5eef4439585bbbb817b7bd28b2cd87c742642 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Oct 18 14:14:12 addons-891059 kubelet[1503]: E1018 14:14:12.969795    1503 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod test-local-path_default(d6bcb3d3-06c5-4ec8-8496-cf302660e01d): ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:4a35a7836fe08f340a42e25c4ac5eef4439585bbbb817b7bd28b2cd87c742642 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 18 14:14:12 addons-891059 kubelet[1503]: E1018 14:14:12.969889    1503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"fetching target platform image selected from image index: reading manifest sha256:4a35a7836fe08f340a42e25c4ac5eef4439585bbbb817b7bd28b2cd87c742642 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="d6bcb3d3-06c5-4ec8-8496-cf302660e01d"
	Oct 18 14:14:18 addons-891059 kubelet[1503]: I1018 14:14:18.225309    1503 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="headlamp/headlamp-6945c6f4d-67bwz" podStartSLOduration=1.441638244 podStartE2EDuration="1m26.225276837s" podCreationTimestamp="2025-10-18 14:12:52 +0000 UTC" firstStartedPulling="2025-10-18 14:12:53.119469669 +0000 UTC m=+213.797011720" lastFinishedPulling="2025-10-18 14:14:17.903108249 +0000 UTC m=+298.580650313" observedRunningTime="2025-10-18 14:14:18.222225262 +0000 UTC m=+298.899767316" watchObservedRunningTime="2025-10-18 14:14:18.225276837 +0000 UTC m=+298.902818908"
	Oct 18 14:14:19 addons-891059 kubelet[1503]: E1018 14:14:19.988733    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760796859988087336  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:520517}  inodes_used:{value:186}}"
	Oct 18 14:14:19 addons-891059 kubelet[1503]: E1018 14:14:19.988787    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760796859988087336  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:520517}  inodes_used:{value:186}}"
	Oct 18 14:14:24 addons-891059 kubelet[1503]: E1018 14:14:24.482461    1503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:4a35a7836fe08f340a42e25c4ac5eef4439585bbbb817b7bd28b2cd87c742642 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="d6bcb3d3-06c5-4ec8-8496-cf302660e01d"
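
Every pull failure in the kubelet log is Docker Hub's unauthenticated rate limit (toomanyrequests), not a cluster fault. A hedged workaround sketch is to pull on an authenticated or unthrottled host and side-load the image into the node, using the image names from the failures above:
	docker pull docker.io/library/nginx:alpine
	minikube -p addons-891059 image load docker.io/library/nginx:alpine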
	
	
	==> storage-provisioner [da75007bac0f47603bb3540fd8ae444427639a840b26793c26a279445acc6504] <==
	W1018 14:13:59.414822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:14:01.419372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:14:01.424793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:14:03.428221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:14:03.433785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:14:05.437751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:14:05.453927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:14:07.457680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:14:07.466437       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:14:09.471068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:14:09.477464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:14:11.481924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:14:11.491926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:14:13.496518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:14:13.503668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:14:15.511850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:14:15.532646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:14:17.539214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:14:17.809002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:14:19.813048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:14:19.821677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:14:21.826690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:14:21.838944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:14:23.848887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:14:23.856010       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
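
The storage-provisioner warnings are client-go flagging its Endpoints-based leader election; they are harmless noise until the provisioner switches to coordination.k8s.io Leases. A hedged look at the lock object (the k8s.io-minikube-hostpath name is the provisioner's default and an assumption here):
	kubectl --context addons-891059 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml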
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-891059 -n addons-891059
helpers_test.go:269: (dbg) Run:  kubectl --context addons-891059 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path headlamp-6945c6f4d-67bwz ingress-nginx-admission-create-nbrm2 ingress-nginx-admission-patch-lz2l5
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-891059 describe pod nginx task-pv-pod test-local-path headlamp-6945c6f4d-67bwz ingress-nginx-admission-create-nbrm2 ingress-nginx-admission-patch-lz2l5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-891059 describe pod nginx task-pv-pod test-local-path headlamp-6945c6f4d-67bwz ingress-nginx-admission-create-nbrm2 ingress-nginx-admission-patch-lz2l5: exit status 1 (94.911449ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-891059/192.168.39.100
	Start Time:       Sat, 18 Oct 2025 14:11:37 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lrm2j (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lrm2j:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  2m49s                default-scheduler  Successfully assigned default/nginx to addons-891059
	  Warning  Failed     59s                  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     59s                  kubelet            Error: ErrImagePull
	  Normal   BackOff    58s                  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     58s                  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    45s (x2 over 2m49s)  kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-891059/192.168.39.100
	Start Time:       Sat, 18 Oct 2025 14:11:27 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-48qc7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-48qc7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  2m59s                default-scheduler  Successfully assigned default/task-pv-pod to addons-891059
	  Warning  Failed     90s                  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     90s                  kubelet            Error: ErrImagePull
	  Normal   BackOff    90s                  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     90s                  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    76s (x2 over 2m59s)  kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-891059/192.168.39.100
	Start Time:       Sat, 18 Oct 2025 14:11:23 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2cp2j (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-2cp2j:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m3s                 default-scheduler  Successfully assigned default/test-local-path to addons-891059
	  Warning  Failed     2m1s                 kubelet            Failed to pull image "busybox:stable": initializing source docker://busybox:stable: reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    106s (x2 over 3m2s)  kubelet            Pulling image "busybox:stable"
	  Warning  Failed     14s (x2 over 2m1s)   kubelet            Error: ErrImagePull
	  Warning  Failed     14s                  kubelet            Failed to pull image "busybox:stable": fetching target platform image selected from image index: reading manifest sha256:4a35a7836fe08f340a42e25c4ac5eef4439585bbbb817b7bd28b2cd87c742642 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2s (x2 over 2m)      kubelet            Back-off pulling image "busybox:stable"
	  Warning  Failed     2s (x2 over 2m)      kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "headlamp-6945c6f4d-67bwz" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-nbrm2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-lz2l5" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-891059 describe pod nginx task-pv-pod test-local-path headlamp-6945c6f4d-67bwz ingress-nginx-admission-create-nbrm2 ingress-nginx-admission-patch-lz2l5: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-891059 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-891059 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.981698547s)
--- FAIL: TestAddons/parallel/LocalPath (231.53s)
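
All three non-running default-namespace pods described above are stuck in ErrImagePull/ImagePullBackOff on the same Docker Hub rate limit, so this LocalPath failure is environmental rather than a regression. A hedged one-liner to collect the relevant events in order:
	kubectl --context addons-891059 get events -n default --field-selector reason=Failed --sort-by=.lastTimestamp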

                                                
                                    
TestFunctional/parallel/DashboardCmd (302.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-900196 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-900196 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-900196 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-900196 --alsologtostderr -v=1] stderr:
I1018 14:32:20.400230 1770906 out.go:360] Setting OutFile to fd 1 ...
I1018 14:32:20.400524 1770906 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 14:32:20.400535 1770906 out.go:374] Setting ErrFile to fd 2...
I1018 14:32:20.400539 1770906 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 14:32:20.400760 1770906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1755824/.minikube/bin
I1018 14:32:20.401066 1770906 mustload.go:65] Loading cluster: functional-900196
I1018 14:32:20.401421 1770906 config.go:182] Loaded profile config "functional-900196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 14:32:20.401792 1770906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 14:32:20.401855 1770906 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 14:32:20.416120 1770906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45609
I1018 14:32:20.416678 1770906 main.go:141] libmachine: () Calling .GetVersion
I1018 14:32:20.417190 1770906 main.go:141] libmachine: Using API Version  1
I1018 14:32:20.417214 1770906 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 14:32:20.417633 1770906 main.go:141] libmachine: () Calling .GetMachineName
I1018 14:32:20.417876 1770906 main.go:141] libmachine: (functional-900196) Calling .GetState
I1018 14:32:20.419422 1770906 host.go:66] Checking if "functional-900196" exists ...
I1018 14:32:20.419726 1770906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 14:32:20.419773 1770906 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 14:32:20.433731 1770906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46167
I1018 14:32:20.434160 1770906 main.go:141] libmachine: () Calling .GetVersion
I1018 14:32:20.434589 1770906 main.go:141] libmachine: Using API Version  1
I1018 14:32:20.434606 1770906 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 14:32:20.435097 1770906 main.go:141] libmachine: () Calling .GetMachineName
I1018 14:32:20.435297 1770906 main.go:141] libmachine: (functional-900196) Calling .DriverName
I1018 14:32:20.435528 1770906 api_server.go:166] Checking apiserver status ...
I1018 14:32:20.435596 1770906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1018 14:32:20.435628 1770906 main.go:141] libmachine: (functional-900196) Calling .GetSSHHostname
I1018 14:32:20.438750 1770906 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
I1018 14:32:20.439156 1770906 main.go:141] libmachine: (functional-900196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:a4:ac", ip: ""} in network mk-functional-900196: {Iface:virbr1 ExpiryTime:2025-10-18 15:22:18 +0000 UTC Type:0 Mac:52:54:00:e2:a4:ac Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:functional-900196 Clientid:01:52:54:00:e2:a4:ac}
I1018 14:32:20.439191 1770906 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined IP address 192.168.39.34 and MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
I1018 14:32:20.439332 1770906 main.go:141] libmachine: (functional-900196) Calling .GetSSHPort
I1018 14:32:20.439501 1770906 main.go:141] libmachine: (functional-900196) Calling .GetSSHKeyPath
I1018 14:32:20.439692 1770906 main.go:141] libmachine: (functional-900196) Calling .GetSSHUsername
I1018 14:32:20.439815 1770906 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/functional-900196/id_rsa Username:docker}
I1018 14:32:20.539867 1770906 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5837/cgroup
W1018 14:32:20.559732 1770906 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5837/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1018 14:32:20.559800 1770906 ssh_runner.go:195] Run: ls
I1018 14:32:20.567640 1770906 api_server.go:253] Checking apiserver healthz at https://192.168.39.34:8441/healthz ...
I1018 14:32:20.574360 1770906 api_server.go:279] https://192.168.39.34:8441/healthz returned 200:
ok
W1018 14:32:20.574407 1770906 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1018 14:32:20.574569 1770906 config.go:182] Loaded profile config "functional-900196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 14:32:20.574581 1770906 addons.go:69] Setting dashboard=true in profile "functional-900196"
I1018 14:32:20.574588 1770906 addons.go:238] Setting addon dashboard=true in "functional-900196"
I1018 14:32:20.574616 1770906 host.go:66] Checking if "functional-900196" exists ...
I1018 14:32:20.574874 1770906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 14:32:20.574918 1770906 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 14:32:20.589257 1770906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44079
I1018 14:32:20.589803 1770906 main.go:141] libmachine: () Calling .GetVersion
I1018 14:32:20.590357 1770906 main.go:141] libmachine: Using API Version  1
I1018 14:32:20.590394 1770906 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 14:32:20.590757 1770906 main.go:141] libmachine: () Calling .GetMachineName
I1018 14:32:20.591366 1770906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 14:32:20.591419 1770906 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 14:32:20.605746 1770906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43465
I1018 14:32:20.606268 1770906 main.go:141] libmachine: () Calling .GetVersion
I1018 14:32:20.606775 1770906 main.go:141] libmachine: Using API Version  1
I1018 14:32:20.606796 1770906 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 14:32:20.607126 1770906 main.go:141] libmachine: () Calling .GetMachineName
I1018 14:32:20.607335 1770906 main.go:141] libmachine: (functional-900196) Calling .GetState
I1018 14:32:20.609288 1770906 main.go:141] libmachine: (functional-900196) Calling .DriverName
I1018 14:32:20.611237 1770906 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1018 14:32:20.612576 1770906 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1018 14:32:20.613810 1770906 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1018 14:32:20.613829 1770906 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1018 14:32:20.613849 1770906 main.go:141] libmachine: (functional-900196) Calling .GetSSHHostname
I1018 14:32:20.617626 1770906 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
I1018 14:32:20.618021 1770906 main.go:141] libmachine: (functional-900196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:a4:ac", ip: ""} in network mk-functional-900196: {Iface:virbr1 ExpiryTime:2025-10-18 15:22:18 +0000 UTC Type:0 Mac:52:54:00:e2:a4:ac Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:functional-900196 Clientid:01:52:54:00:e2:a4:ac}
I1018 14:32:20.618056 1770906 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined IP address 192.168.39.34 and MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
I1018 14:32:20.618195 1770906 main.go:141] libmachine: (functional-900196) Calling .GetSSHPort
I1018 14:32:20.618401 1770906 main.go:141] libmachine: (functional-900196) Calling .GetSSHKeyPath
I1018 14:32:20.618578 1770906 main.go:141] libmachine: (functional-900196) Calling .GetSSHUsername
I1018 14:32:20.618730 1770906 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/functional-900196/id_rsa Username:docker}
I1018 14:32:20.718395 1770906 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1018 14:32:20.718424 1770906 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1018 14:32:20.743167 1770906 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1018 14:32:20.743208 1770906 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1018 14:32:20.768977 1770906 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1018 14:32:20.769016 1770906 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1018 14:32:20.795234 1770906 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1018 14:32:20.795261 1770906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1018 14:32:20.818825 1770906 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I1018 14:32:20.818861 1770906 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1018 14:32:20.842018 1770906 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1018 14:32:20.842046 1770906 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1018 14:32:20.864479 1770906 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1018 14:32:20.864508 1770906 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1018 14:32:20.888225 1770906 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1018 14:32:20.888254 1770906 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1018 14:32:20.911135 1770906 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1018 14:32:20.911164 1770906 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1018 14:32:20.934553 1770906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
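The step above stages each dashboard manifest under /etc/kubernetes/addons over SSH, then applies them all in a single kubectl invocation using the node's bundled binary and kubeconfig. A small sketch that reconstructs that exact command line from the manifest list (paths and versions as shown in the log):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		manifests := []string{
			"dashboard-ns.yaml", "dashboard-clusterrole.yaml",
			"dashboard-clusterrolebinding.yaml", "dashboard-configmap.yaml",
			"dashboard-dp.yaml", "dashboard-role.yaml",
			"dashboard-rolebinding.yaml", "dashboard-sa.yaml",
			"dashboard-secret.yaml", "dashboard-svc.yaml",
		}
		args := []string{
			"sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply",
		}
		for _, m := range manifests {
			args = append(args, "-f", "/etc/kubernetes/addons/"+m)
		}
		// Prints the command the log shows being run over ssh_runner.
		fmt.Println(strings.Join(args, " "))
	}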
I1018 14:32:21.666369 1770906 main.go:141] libmachine: Making call to close driver server
I1018 14:32:21.666404 1770906 main.go:141] libmachine: (functional-900196) Calling .Close
I1018 14:32:21.666771 1770906 main.go:141] libmachine: Successfully made call to close driver server
I1018 14:32:21.666793 1770906 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 14:32:21.666808 1770906 main.go:141] libmachine: Making call to close driver server
I1018 14:32:21.666835 1770906 main.go:141] libmachine: (functional-900196) Calling .Close
I1018 14:32:21.667095 1770906 main.go:141] libmachine: (functional-900196) DBG | Closing plugin on server side
I1018 14:32:21.667143 1770906 main.go:141] libmachine: Successfully made call to close driver server
I1018 14:32:21.667154 1770906 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 14:32:21.668708 1770906 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-900196 addons enable metrics-server

I1018 14:32:21.670189 1770906 addons.go:201] Writing out "functional-900196" config to set dashboard=true...
W1018 14:32:21.670562 1770906 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1018 14:32:21.671497 1770906 kapi.go:59] client config for functional-900196: &rest.Config{Host:"https://192.168.39.34:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.key", CAFile:"/home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(ni
l), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1018 14:32:21.672215 1770906 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1018 14:32:21.672244 1770906 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1018 14:32:21.672252 1770906 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1018 14:32:21.672261 1770906 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1018 14:32:21.672271 1770906 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1018 14:32:21.683079 1770906 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  f16e224a-a276-4abd-80ec-7a288eefcdf8 1250 0 2025-10-18 14:32:21 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-10-18 14:32:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.110.45.148,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.110.45.148],IPFamilies:[IPv4],AllocateLoadBalance
rNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
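The Service found above maps port 80 to targetPort 9090, which is why the verifier polls the apiserver's standard service-proxy path /api/v1/namespaces/<ns>/services/<scheme>:<name>:<port>/proxy/ with an empty port segment (the service's only port is then used). A sketch of that URL construction:

	package main

	import "fmt"

	// proxyURL builds the apiserver service-proxy path polled in the log.
	// An empty port selects the service's sole port (80 above).
	func proxyURL(host, ns, scheme, svc, port string) string {
		return fmt.Sprintf("http://%s/api/v1/namespaces/%s/services/%s:%s:%s/proxy/",
			host, ns, scheme, svc, port)
	}

	func main() {
		fmt.Println(proxyURL("127.0.0.1:36195", "kubernetes-dashboard",
			"http", "kubernetes-dashboard", ""))
	}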
W1018 14:32:21.683280 1770906 out.go:285] * Launching proxy ...
* Launching proxy ...
I1018 14:32:21.683381 1770906 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-900196 proxy --port 36195]
I1018 14:32:21.683759 1770906 dashboard.go:157] Waiting for kubectl to output host:port ...
I1018 14:32:21.733147 1770906 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
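dashboard.go:157 waits for kubectl proxy to print its "Starting to serve on host:port" banner before probing it. A hedged sketch of launching the proxy and scanning stdout for that line (context name and port taken from the log):

	package main

	import (
		"bufio"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("kubectl", "--context", "functional-900196",
			"proxy", "--port", "36195")
		stdout, err := cmd.StdoutPipe()
		if err != nil {
			panic(err)
		}
		if err := cmd.Start(); err != nil {
			panic(err)
		}
		scanner := bufio.NewScanner(stdout)
		for scanner.Scan() {
			line := scanner.Text()
			if strings.HasPrefix(line, "Starting to serve on ") {
				fmt.Println("proxy ready at",
					strings.TrimPrefix(line, "Starting to serve on "))
				break
			}
		}
		_ = cmd.Process.Kill() // sketch only: stop the proxy when done
	}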
W1018 14:32:21.733253 1770906 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1018 14:32:21.742218 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[16da225c-1553-4bae-9480-1d529a381730] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:32:21 GMT]] Body:0xc00084ccc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001822000 TLS:<nil>}
I1018 14:32:21.742292 1770906 retry.go:31] will retry after 105.405µs: Temporary Error: unexpected response code: 503
I1018 14:32:21.746774 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a8409789-0f6e-484e-8005-592eeb09e697] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:32:21 GMT]] Body:0xc00167a640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000333680 TLS:<nil>}
I1018 14:32:21.746846 1770906 retry.go:31] will retry after 182.77µs: Temporary Error: unexpected response code: 503
I1018 14:32:21.751637 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3ce9da1e-bdc9-44f0-8e4b-74d65af34449] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:32:21 GMT]] Body:0xc00182a140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e88c0 TLS:<nil>}
I1018 14:32:21.751695 1770906 retry.go:31] will retry after 169.142µs: Temporary Error: unexpected response code: 503
I1018 14:32:21.756716 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[09ce09b9-cb6c-47bd-85fa-61bb8ae92b28] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:32:21 GMT]] Body:0xc00084cf00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001822140 TLS:<nil>}
I1018 14:32:21.756767 1770906 retry.go:31] will retry after 339.804µs: Temporary Error: unexpected response code: 503
I1018 14:32:21.761328 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7c38ee1a-99ee-40d6-a317-60fc4b8a7b38] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:32:21 GMT]] Body:0xc00182a240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000333a40 TLS:<nil>}
I1018 14:32:21.761395 1770906 retry.go:31] will retry after 589.783µs: Temporary Error: unexpected response code: 503
I1018 14:32:21.766546 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[26adbbe7-f414-452c-b65a-650bd1427d48] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:32:21 GMT]] Body:0xc00167a740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001822280 TLS:<nil>}
I1018 14:32:21.766601 1770906 retry.go:31] will retry after 768.95µs: Temporary Error: unexpected response code: 503
I1018 14:32:21.770522 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f6ed607a-fc85-4322-a398-adca581a3bf5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:32:21 GMT]] Body:0xc00182a340 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e8a00 TLS:<nil>}
I1018 14:32:21.770571 1770906 retry.go:31] will retry after 1.291674ms: Temporary Error: unexpected response code: 503
I1018 14:32:21.777696 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6814a8a5-4637-44c4-a83a-1d952599fdaf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:32:21 GMT]] Body:0xc00084d000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0018223c0 TLS:<nil>}
I1018 14:32:21.777754 1770906 retry.go:31] will retry after 1.377057ms: Temporary Error: unexpected response code: 503
I1018 14:32:21.786704 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9f1c8cda-1f01-4870-b48f-3b8ee939d819] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:32:21 GMT]] Body:0xc00182a400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000333b80 TLS:<nil>}
I1018 14:32:21.786760 1770906 retry.go:31] will retry after 3.363378ms: Temporary Error: unexpected response code: 503
I1018 14:32:21.794547 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a4165471-f5d0-456c-9e4d-bc7f01374716] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:32:21 GMT]] Body:0xc00167a880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001822500 TLS:<nil>}
I1018 14:32:21.794612 1770906 retry.go:31] will retry after 4.592415ms: Temporary Error: unexpected response code: 503
I1018 14:32:21.803098 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[35391399-5771-48d5-9168-48313d02917e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:32:21 GMT]] Body:0xc00084d240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e8b40 TLS:<nil>}
I1018 14:32:21.803152 1770906 retry.go:31] will retry after 6.628278ms: Temporary Error: unexpected response code: 503
I1018 14:32:21.813643 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9b6db941-148d-4513-ba9e-db95f668c2e7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:32:21 GMT]] Body:0xc00182a4c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000333cc0 TLS:<nil>}
I1018 14:32:21.813699 1770906 retry.go:31] will retry after 5.027453ms: Temporary Error: unexpected response code: 503
I1018 14:32:21.821441 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bf382116-7b16-40f1-959a-fff3c6c8d51d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:32:21 GMT]] Body:0xc00167a980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001822640 TLS:<nil>}
I1018 14:32:21.821490 1770906 retry.go:31] will retry after 11.56355ms: Temporary Error: unexpected response code: 503
I1018 14:32:21.836952 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[97eb0a09-4a6e-4299-afc3-a5da8171f2a8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:32:21 GMT]] Body:0xc00182a5c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e8dc0 TLS:<nil>}
I1018 14:32:21.837025 1770906 retry.go:31] will retry after 12.984145ms: Temporary Error: unexpected response code: 503
I1018 14:32:21.853355 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[313a454d-01d0-4540-80f5-a78cf8bc894f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:32:21 GMT]] Body:0xc00167aa80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001822780 TLS:<nil>}
I1018 14:32:21.853428 1770906 retry.go:31] will retry after 14.681895ms: Temporary Error: unexpected response code: 503
I1018 14:32:21.871688 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c0b57198-f21e-4a4d-9a2d-ce684000e665] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:32:21 GMT]] Body:0xc00084d3c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e8f00 TLS:<nil>}
I1018 14:32:21.871777 1770906 retry.go:31] will retry after 25.529014ms: Temporary Error: unexpected response code: 503
I1018 14:32:21.902033 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fffaf821-7835-4b79-a6d2-168cf7fb5134] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:32:21 GMT]] Body:0xc00182a6c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000333e00 TLS:<nil>}
I1018 14:32:21.902104 1770906 retry.go:31] will retry after 72.43913ms: Temporary Error: unexpected response code: 503
I1018 14:32:21.982099 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ead97a57-6577-431e-b8ed-4ab05a23897b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:32:21 GMT]] Body:0xc00167ab80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0018228c0 TLS:<nil>}
I1018 14:32:21.982199 1770906 retry.go:31] will retry after 107.826351ms: Temporary Error: unexpected response code: 503
I1018 14:32:22.096259 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[61f8dc4a-106f-44fc-9ceb-6341807a1071] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:32:22 GMT]] Body:0xc00182a7c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e9040 TLS:<nil>}
I1018 14:32:22.096358 1770906 retry.go:31] will retry after 113.431278ms: Temporary Error: unexpected response code: 503
I1018 14:32:22.215485 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[900794ef-4eaf-4bd7-ba9b-263222228f1b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:32:22 GMT]] Body:0xc00182a880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001822a00 TLS:<nil>}
I1018 14:32:22.215576 1770906 retry.go:31] will retry after 234.282633ms: Temporary Error: unexpected response code: 503
I1018 14:32:22.453373 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0b48e2e6-70cc-483b-a1ad-5b7211a2a02d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:32:22 GMT]] Body:0xc00084d640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e9180 TLS:<nil>}
I1018 14:32:22.453438 1770906 retry.go:31] will retry after 370.485569ms: Temporary Error: unexpected response code: 503
I1018 14:32:22.827800 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0c9048ee-6af8-4457-97d8-676f0180aa8c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:32:22 GMT]] Body:0xc00182a900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017bc000 TLS:<nil>}
I1018 14:32:22.827870 1770906 retry.go:31] will retry after 459.412729ms: Temporary Error: unexpected response code: 503
I1018 14:32:23.291368 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[954011b9-2e5a-45ca-bdfa-ae65e9890c05] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:32:23 GMT]] Body:0xc00167ad00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001822b40 TLS:<nil>}
I1018 14:32:23.291465 1770906 retry.go:31] will retry after 511.090611ms: Temporary Error: unexpected response code: 503
I1018 14:32:23.806604 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[94bf7d7c-8dc3-4abd-973d-f4efee91b5cb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:32:23 GMT]] Body:0xc00084d800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e92c0 TLS:<nil>}
I1018 14:32:23.806717 1770906 retry.go:31] will retry after 1.663695729s: Temporary Error: unexpected response code: 503
I1018 14:32:25.474129 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a61ca7e8-7785-4ed7-ba89-abe5ac21f95c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:32:25 GMT]] Body:0xc00182aa00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017bc140 TLS:<nil>}
I1018 14:32:25.474212 1770906 retry.go:31] will retry after 1.298676775s: Temporary Error: unexpected response code: 503
I1018 14:32:26.777030 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[81a67d2a-9825-47bd-b1f5-ee90c94e4b8a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:32:26 GMT]] Body:0xc00167ae00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001822c80 TLS:<nil>}
I1018 14:32:26.777110 1770906 retry.go:31] will retry after 3.719110252s: Temporary Error: unexpected response code: 503
I1018 14:32:30.500192 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1c6db5bf-3879-41b6-a82e-ab2354155bda] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:32:30 GMT]] Body:0xc00084d940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001760000 TLS:<nil>}
I1018 14:32:30.500258 1770906 retry.go:31] will retry after 3.415524481s: Temporary Error: unexpected response code: 503
I1018 14:32:33.921314 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b0de8009-b2cf-4990-ba02-202cf301f830] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:32:33 GMT]] Body:0xc00084db00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001822dc0 TLS:<nil>}
I1018 14:32:33.921432 1770906 retry.go:31] will retry after 5.124658286s: Temporary Error: unexpected response code: 503
I1018 14:32:39.051647 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a0930708-7afa-4d01-aedc-1351899b38ca] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:32:39 GMT]] Body:0xc00167af00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017bc280 TLS:<nil>}
I1018 14:32:39.051727 1770906 retry.go:31] will retry after 7.628621785s: Temporary Error: unexpected response code: 503
I1018 14:32:46.684370 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[71fcf118-d8cf-4989-918f-7a4cd8d29b10] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:32:46 GMT]] Body:0xc00167b000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001760140 TLS:<nil>}
I1018 14:32:46.684454 1770906 retry.go:31] will retry after 13.965181003s: Temporary Error: unexpected response code: 503
I1018 14:33:00.653904 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[96d7650e-8c0d-4d53-9b6a-73f876cd4187] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:00 GMT]] Body:0xc00182abc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017bc3c0 TLS:<nil>}
I1018 14:33:00.653980 1770906 retry.go:31] will retry after 20.130941883s: Temporary Error: unexpected response code: 503
I1018 14:33:20.789685 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3dc57503-073c-4c67-a5b1-7537040cbc8d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:20 GMT]] Body:0xc00167b080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017bc500 TLS:<nil>}
I1018 14:33:20.789764 1770906 retry.go:31] will retry after 32.992103377s: Temporary Error: unexpected response code: 503
I1018 14:33:53.787411 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[605bb2d5-0440-4320-8ea5-7084c8e4bd29] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:53 GMT]] Body:0xc00182ac40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017bc640 TLS:<nil>}
I1018 14:33:53.787479 1770906 retry.go:31] will retry after 1m4.072725868s: Temporary Error: unexpected response code: 503
I1018 14:34:57.865918 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[aff73d6b-47c5-4495-8a71-eec8ae237301] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:34:57 GMT]] Body:0xc00084c140 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017bc780 TLS:<nil>}
I1018 14:34:57.866003 1770906 retry.go:31] will retry after 1m12.150739089s: Temporary Error: unexpected response code: 503
I1018 14:36:10.024372 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5e3c2fed-6931-47c8-ada7-e587578b1e62] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:36:10 GMT]] Body:0xc0007b0100 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017bc8c0 TLS:<nil>}
I1018 14:36:10.024470 1770906 retry.go:31] will retry after 39.031364748s: Temporary Error: unexpected response code: 503
I1018 14:36:49.060461 1770906 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8f52fb9e-a443-4210-b665-d96688a4a1e9] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:36:49 GMT]] Body:0xc00167a080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017bca00 TLS:<nil>}
I1018 14:36:49.060562 1770906 retry.go:31] will retry after 1m23.348743958s: Temporary Error: unexpected response code: 503
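The "will retry after" intervals above trace an exponential backoff with jitter, starting around 100µs and growing past a minute before the verifier exhausts its budget. A hedged sketch of that cadence (illustrative only, not minikube's retry implementation):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	func main() {
		delay := 100 * time.Microsecond
		for attempt := 1; attempt <= 10; attempt++ {
			// Jitter keeps concurrent pollers from synchronizing.
			jittered := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("attempt %d: will retry after %v\n", attempt, jittered)
			delay *= 2 // cap omitted; the log shows growth to ~1m23s
		}
	}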
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-900196 -n functional-900196
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-900196 logs -n 25: (1.584602123s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service        │ functional-900196 service --namespace=default --https --url hello-node                                                                                       │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │                     │
	│ update-context │ functional-900196 update-context --alsologtostderr -v=2                                                                                                      │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ update-context │ functional-900196 update-context --alsologtostderr -v=2                                                                                                      │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ service        │ functional-900196 service hello-node --url --format={{.IP}}                                                                                                  │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │                     │
	│ update-context │ functional-900196 update-context --alsologtostderr -v=2                                                                                                      │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ image          │ functional-900196 image load --daemon kicbase/echo-server:functional-900196 --alsologtostderr                                                                │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ service        │ functional-900196 service hello-node --url                                                                                                                   │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │                     │
	│ image          │ functional-900196 image ls                                                                                                                                   │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ image          │ functional-900196 image load --daemon kicbase/echo-server:functional-900196 --alsologtostderr                                                                │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ image          │ functional-900196 image ls                                                                                                                                   │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ image          │ functional-900196 image load --daemon kicbase/echo-server:functional-900196 --alsologtostderr                                                                │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ image          │ functional-900196 image ls                                                                                                                                   │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ image          │ functional-900196 image save kicbase/echo-server:functional-900196 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ image          │ functional-900196 image rm kicbase/echo-server:functional-900196 --alsologtostderr                                                                           │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ image          │ functional-900196 image ls                                                                                                                                   │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ image          │ functional-900196 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ image          │ functional-900196 image ls                                                                                                                                   │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ image          │ functional-900196 image save --daemon kicbase/echo-server:functional-900196 --alsologtostderr                                                                │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ image          │ functional-900196 image ls --format yaml --alsologtostderr                                                                                                   │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ ssh            │ functional-900196 ssh pgrep buildkitd                                                                                                                        │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │                     │
	│ image          │ functional-900196 image ls --format short --alsologtostderr                                                                                                  │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ image          │ functional-900196 image build -t localhost/my-image:functional-900196 testdata/build --alsologtostderr                                                       │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ image          │ functional-900196 image ls --format json --alsologtostderr                                                                                                   │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ image          │ functional-900196 image ls --format table --alsologtostderr                                                                                                  │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ image          │ functional-900196 image ls                                                                                                                                   │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	└────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 14:32:20
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 14:32:20.260818 1770878 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:32:20.261074 1770878 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:32:20.261085 1770878 out.go:374] Setting ErrFile to fd 2...
	I1018 14:32:20.261090 1770878 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:32:20.261277 1770878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1755824/.minikube/bin
	I1018 14:32:20.261753 1770878 out.go:368] Setting JSON to false
	I1018 14:32:20.262755 1770878 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":22488,"bootTime":1760775452,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 14:32:20.262872 1770878 start.go:141] virtualization: kvm guest
	I1018 14:32:20.264871 1770878 out.go:179] * [functional-900196] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 14:32:20.266558 1770878 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 14:32:20.266576 1770878 notify.go:220] Checking for updates...
	I1018 14:32:20.268996 1770878 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 14:32:20.270583 1770878 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1755824/kubeconfig
	I1018 14:32:20.271947 1770878 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1755824/.minikube
	I1018 14:32:20.276153 1770878 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 14:32:20.277526 1770878 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 14:32:20.279077 1770878 config.go:182] Loaded profile config "functional-900196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:32:20.279476 1770878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:32:20.279547 1770878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:32:20.293619 1770878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40129
	I1018 14:32:20.294123 1770878 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:32:20.294734 1770878 main.go:141] libmachine: Using API Version  1
	I1018 14:32:20.294763 1770878 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:32:20.295134 1770878 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:32:20.295334 1770878 main.go:141] libmachine: (functional-900196) Calling .DriverName
	I1018 14:32:20.295663 1770878 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 14:32:20.296029 1770878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:32:20.296083 1770878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:32:20.310256 1770878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36629
	I1018 14:32:20.310819 1770878 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:32:20.311405 1770878 main.go:141] libmachine: Using API Version  1
	I1018 14:32:20.311440 1770878 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:32:20.311890 1770878 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:32:20.312119 1770878 main.go:141] libmachine: (functional-900196) Calling .DriverName
	I1018 14:32:20.344597 1770878 out.go:179] * Using the kvm2 driver based on existing profile
	I1018 14:32:20.345696 1770878 start.go:305] selected driver: kvm2
	I1018 14:32:20.345710 1770878 start.go:925] validating driver "kvm2" against &{Name:functional-900196 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-900196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.34 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:32:20.345818 1770878 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 14:32:20.346798 1770878 cni.go:84] Creating CNI manager for ""
	I1018 14:32:20.346852 1770878 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 14:32:20.346901 1770878 start.go:349] cluster config:
	{Name:functional-900196 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-900196 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.34 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:32:20.348722 1770878 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 18 14:37:21 functional-900196 crio[5303]: time="2025-10-18 14:37:21.228844970Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760798241228817330,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201240,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3ebbe5bd-3315-4746-a634-f2f395be428f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:37:21 functional-900196 crio[5303]: time="2025-10-18 14:37:21.229787162Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1b9cc508-02e0-47d7-a8c2-1a2bc43f0fa0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:37:21 functional-900196 crio[5303]: time="2025-10-18 14:37:21.230096780Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1b9cc508-02e0-47d7-a8c2-1a2bc43f0fa0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:37:21 functional-900196 crio[5303]: time="2025-10-18 14:37:21.230424930Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{
	  &Container{Id:3a6150c71b2aba5471783b062a7b940e5d8823a4ffdc974bc8cbcafef4b47a8c,PodSandboxId:462f63cd1cd9b507f8b65ae177f769b32917291abbec415412e8cf1d2f7bad32,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1760797932903872088,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b1c21ed2-b86c-4e19-a613-f6d67149156e,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
	  &Container{Id:8ef6272e347114f48d4fe3e59f62f8fbd9d6fe65a3c2376c1e41119952c7a330,PodSandboxId:6c396b3a6d33f5432556b5422742fa4bc0bfd9450fb4a32311d54c98d5a37d0b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760797445461154411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7m2x4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d219f60f-61db-4f59-beb6-f1014320fded,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
	  &Container{Id:08267a0026df926525df9dfe06132bd39e9bdc06eb9ee97f4286651cddabc784,PodSandboxId:643097cfed919a042fe18fd1a3ba3decb51ffd3e08a2460e6bc52f5766ac082e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760797445279809730,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwq2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28c9616d-7ca6-4480-bb36-f61b451a4b23,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
	  &Container{Id:0e1bccb3b64c5b5b17aec547dccfe3b87145e92687850c7b5f2eeb2fbecd51b8,PodSandboxId:1915389128f2ce0a6550ee48b07913f69022b98604f123ff6e8a8e1b36273e6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760797445247219494,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab2d89b-2ccc-43cd-874a-1c4e895df2f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
	  &Container{Id:8dbff326b6cc8afa6d03358920f9853179471569f784102d88c64cdf4fd85912,PodSandboxId:d89012bc5dd1a9e67f0d93b8983b794c98bf6b83893054547c7aba1c7a22b45c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760797441534367275,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1539d00838a4465e9c70da2faa0ecce0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
	  &Container{Id:aedc2a498839e385f7a9db347ff30ad23bb639573ca6c9ff50a4254948df22d0,PodSandboxId:3f0d73a3a97301f6a016cca5df90761dc2bdf2226ad014d89451471e0e456d0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760797441504384963,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97ed5fabf9bf40e88932da5fec13829b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
	  &Container{Id:b07ae915d18fc54175e6566e7805a5718e41da62fd8311a3fe672c42d5f4ba4d,PodSandboxId:5dd0badd5d559e2de9a32427d0e5bf6d28cf72338ea93be9001384c7b210ff8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760797441484574039,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0adc310d24a81dac60c5ad1f35e7c92b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
	  &Container{Id:53113ba9ccc6d684cb2173828ed00fedd934e317a9477387680bd35747276790,PodSandboxId:a35ae590fec1283d5d898322419c35e8a914d929214811cb84f0e5b076fbbac0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760797441418137967,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db61fe1a843b72c5a01c6ce296ea0908,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
	  &Container{Id:88dbdf96d71bbe880891ce43151faca2a406ca0a6bc43163813a482e8e7b4b10,PodSandboxId:a31dcfcaadf596f99b1b00b651e185a3a4c96ef68508ad8e4b58763486df5dd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760797401310576970,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7m2x4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d219f60f-61db-4f59-beb6-f1014320fded,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
	  &Container{Id:c5b51b4a4c799496c3145843cf20f4bb06e303ff8f4c636509258d860fa6f005,PodSandboxId:d6fe8246788800a71673d766b79d51cda6360d6c9b9f1f5de1c292ab7ae27b55,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760797400979371781,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab2d89b-2ccc-43cd-874a-1c4e895df2f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
	  &Container{Id:5ee1e13dc39acf818c45b34aab5a553b0357925c855ed6903a3974b7e38fd710,PodSandboxId:58706c3ba9833de77c1199c0be5d66ba9b5d1175cad1f0aa8ca286571a930d73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760797400908583885,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwq2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28c9616d-7ca6-4480-bb36-f61b451a4b23,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
	  &Container{Id:6c0794ff6e8e5711e73c6ed64f56ecf0f6dc92706a4d204ee111f11290cf2e44,PodSandboxId:50b972f1c52368f0fbc439ffc234e98462143a277880a96bd1e67be20b278229,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760797397092229596,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97ed5fabf9bf40e88932da5fec13829b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
	  &Container{Id:e89892ddae1dc2d13c768357fc4a9f9f5f5676dbe163ddcf14af300adb499012,PodSandboxId:e0e939baaa67d0fd4f6816b8d93aa969b6b5bf84197f0d2445e6e3e01e191cd3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760797397121488699,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0adc310d24a81dac60c5ad1f35e7c92b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
	  &Container{Id:8a1ba16847db09edd496b432d3f8beb8e87e3ad268c294da60db67bc799aad70,PodSandboxId:88e38926414dc77c4f11b9e11c309696d7379acb8fe1aa3716948b3c8f7f43ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760797397082298798,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1539d00838a4465e9c70da2faa0ecce0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
	  },}" file="otel-collector/interceptors.go:74" id=1b9cc508-02e0-47d7-a8c2-1a2bc43f0fa0 name=/runtime.v1.RuntimeService/ListContainers
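	The Version, ImageFsInfo, and ListContainers entries that repeat through this log are ordinary CRI RPCs, so the same queries can be reproduced by hand on the node (e.g. after minikube ssh -p functional-900196). A minimal sketch, assuming crictl is present in the VM and CRI-O listens on its default endpoint (both assumptions; check /etc/crictl.yaml on the node):

		# Point crictl at the CRI-O socket (CRI-O's default endpoint path)
		export CONTAINER_RUNTIME_ENDPOINT=unix:///var/run/crio/crio.sock
		sudo -E crictl version      # RuntimeService/Version      -> cri-o 1.29.1, as logged above
		sudo -E crictl imagefsinfo  # ImageService/ImageFsInfo    -> image filesystem usage/inodes
		sudo -E crictl ps -a        # RuntimeService/ListContainers with an empty filter (running and exited)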
	Oct 18 14:37:21 functional-900196 crio[5303]: time="2025-10-18 14:37:21.377845206Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=957a5756-cab1-4043-a8e4-28cfa427e1e8 name=/runtime.v1.RuntimeService/Version
	Oct 18 14:37:21 functional-900196 crio[5303]: time="2025-10-18 14:37:21.377940213Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=957a5756-cab1-4043-a8e4-28cfa427e1e8 name=/runtime.v1.RuntimeService/Version
	Oct 18 14:37:21 functional-900196 crio[5303]: time="2025-10-18 14:37:21.379429227Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6aca6678-9e72-48cd-b9b9-5eb2e2f00976 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:37:21 functional-900196 crio[5303]: time="2025-10-18 14:37:21.380208341Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760798241380181485,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201240,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6aca6678-9e72-48cd-b9b9-5eb2e2f00976 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:37:21 functional-900196 crio[5303]: time="2025-10-18 14:37:21.380786536Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a1a3e529-6439-4e8d-9f44-bc23bdeecf4c name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:37:21 functional-900196 crio[5303]: time="2025-10-18 14:37:21.380837527Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a1a3e529-6439-4e8d-9f44-bc23bdeecf4c name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:37:21 functional-900196 crio[5303]: time="2025-10-18 14:37:21.381120483Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3a6150c71b2aba5471783b062a7b940e5d8823a4ffdc974bc8cbcafef4b47a8c,PodSandboxId:462f63cd1cd9b507f8b65ae177f769b32917291abbec415412e8cf1d2f7bad32,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1760797932903872088,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b1c21ed2-b86c-4e19-a613-f6d67149156e,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ef6272e347114f48d4fe3e59f62f8fbd9d6fe65a3c2376c1e41119952c7a330,PodSandboxId:6c396b3a6d33f5432556b5422742fa4bc0bfd9450fb4a32311d54c98d5a37d0b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760797445461154411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7m2x4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d219f60f-61db-4f59-beb6-f1014320fded,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08267a0026df926525df9dfe06132bd39e9bdc06eb9ee97f4286651cddabc784,PodSandboxId:643097cfed919a042fe18fd1a3ba3decb51ffd3e08a2460e6bc52f5766ac082e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760797445279809730,Labels:map[strin
g]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwq2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28c9616d-7ca6-4480-bb36-f61b451a4b23,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e1bccb3b64c5b5b17aec547dccfe3b87145e92687850c7b5f2eeb2fbecd51b8,PodSandboxId:1915389128f2ce0a6550ee48b07913f69022b98604f123ff6e8a8e1b36273e6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760797445247219494,Labels:map[string]string{io.kubernetes.c
ontainer.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab2d89b-2ccc-43cd-874a-1c4e895df2f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dbff326b6cc8afa6d03358920f9853179471569f784102d88c64cdf4fd85912,PodSandboxId:d89012bc5dd1a9e67f0d93b8983b794c98bf6b83893054547c7aba1c7a22b45c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760797441534367275,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1539d00838a4465e9c70da2faa0ecce0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aedc2a498839e385f7a9db347ff30ad23bb639573ca6c9ff50a4254948df22d0,PodSandboxId:3f0d73a3a97301f6a016cca5df90761dc2bdf2226ad014d89451471e0e456d0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb
5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760797441504384963,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97ed5fabf9bf40e88932da5fec13829b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b07ae915d18fc54175e6566e7805a5718e41da62fd8311a3fe672c42d5f4ba4d,PodSandboxId:5dd0badd5d559e2de9a32427d0e5bf6d28cf72338ea93be9001384c7b210ff8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760797441484574039,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0adc310d24a81dac60c5ad1f35e7c92b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53113ba9ccc6d684cb2173828ed00fedd934e317a9477387680bd35747276790,PodSandboxId:a35ae590fec1283d5d898322419c35e8a914d929214811cb84f0e5b076fbbac0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{I
mage:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760797441418137967,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db61fe1a843b72c5a01c6ce296ea0908,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88dbdf96d71bbe880891ce43151faca2a406ca0a6bc43163813a482e8e7b4b10,PodSandboxId:a31dcfcaadf596f99b1b00b651e185a3a4c
96ef68508ad8e4b58763486df5dd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760797401310576970,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7m2x4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d219f60f-61db-4f59-beb6-f1014320fded,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}
],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5b51b4a4c799496c3145843cf20f4bb06e303ff8f4c636509258d860fa6f005,PodSandboxId:d6fe8246788800a71673d766b79d51cda6360d6c9b9f1f5de1c292ab7ae27b55,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760797400979371781,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab2d89b-2ccc-43cd-874a-1c4e895df2f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ee1e13dc39acf818c45b34aab5a553b0357925c855ed6903a3974b7e38fd710,PodSandboxId:58706c3ba9833de77c1199c0be5d66ba9b5d1175cad1f0aa8ca286571a930d73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760797400908583885,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwq2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28c9616d-7ca6-4480-bb36-f61b451a4b23,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0794ff6e8e5711e73c6ed64f56ecf0f6dc92706a4d204ee111f11290cf2e44,PodSandboxId:50b972f1c52368f0fbc439ffc234e98462143a277880a96bd1e67be20b278229,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760797397092229596,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97ed5fabf9bf40e88932da5fec13829b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPor
t\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e89892ddae1dc2d13c768357fc4a9f9f5f5676dbe163ddcf14af300adb499012,PodSandboxId:e0e939baaa67d0fd4f6816b8d93aa969b6b5bf84197f0d2445e6e3e01e191cd3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760797397121488699,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0adc310d24a81dac60c5ad1f35e7c92b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1ba16847db09edd496b432d3f8beb8e87e3ad268c294da60db67bc799aad70,PodSandboxId:88e38926414dc77c4f11b9e11c309696d7379acb8fe1aa3716948b3c8f7f43ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760797397082298798,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-900196,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 1539d00838a4465e9c70da2faa0ecce0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a1a3e529-6439-4e8d-9f44-bc23bdeecf4c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3a6150c71b2ab       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   462f63cd1cd9b       busybox-mount
	8ef6272e34711       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      13 minutes ago      Running             coredns                   2                   6c396b3a6d33f       coredns-66bc5c9577-7m2x4
	08267a0026df9       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      13 minutes ago      Running             kube-proxy                2                   643097cfed919       kube-proxy-lwq2l
	0e1bccb3b64c5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       2                   1915389128f2c       storage-provisioner
	8dbff326b6cc8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      13 minutes ago      Running             kube-controller-manager   2                   d89012bc5dd1a       kube-controller-manager-functional-900196
	aedc2a498839e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      13 minutes ago      Running             etcd                      2                   3f0d73a3a9730       etcd-functional-900196
	b07ae915d18fc       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      13 minutes ago      Running             kube-scheduler            2                   5dd0badd5d559       kube-scheduler-functional-900196
	53113ba9ccc6d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      13 minutes ago      Running             kube-apiserver            0                   a35ae590fec12       kube-apiserver-functional-900196
	88dbdf96d71bb       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      14 minutes ago      Exited              coredns                   1                   a31dcfcaadf59       coredns-66bc5c9577-7m2x4
	c5b51b4a4c799       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      14 minutes ago      Exited              storage-provisioner       1                   d6fe824678880       storage-provisioner
	5ee1e13dc39ac       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      14 minutes ago      Exited              kube-proxy                1                   58706c3ba9833       kube-proxy-lwq2l
	e89892ddae1dc       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      14 minutes ago      Exited              kube-scheduler            1                   e0e939baaa67d       kube-scheduler-functional-900196
	6c0794ff6e8e5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      14 minutes ago      Exited              etcd                      1                   50b972f1c5236       etcd-functional-900196
	8a1ba16847db0       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      14 minutes ago      Exited              kube-controller-manager   1                   88e38926414dc       kube-controller-manager-functional-900196
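	A quick way to reproduce the listing above is to run crictl inside the guest over minikube ssh (profile name taken from the node name in these logs); "crictl ps -a" shows running and exited containers alike:
	
	$ minikube ssh -p functional-900196 -- sudo crictl ps -a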
	
	
	==> coredns [88dbdf96d71bbe880891ce43151faca2a406ca0a6bc43163813a482e8e7b4b10] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42586 - 65064 "HINFO IN 7206085342544834509.5779663432164893704. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.097798211s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8ef6272e347114f48d4fe3e59f62f8fbd9d6fe65a3c2376c1e41119952c7a330] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44491 - 9280 "HINFO IN 4407530105380212382.4237632423946435234. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.081412794s
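	The two coredns logs above cover attempt 1 (exited) and attempt 2 (running) of the same pod; both can be retrieved with kubectl, where --previous returns the terminated attempt (kubeconfig context assumed to match the profile name, as elsewhere in this report):
	
	$ kubectl --context functional-900196 -n kube-system logs coredns-66bc5c9577-7m2x4
	$ kubectl --context functional-900196 -n kube-system logs coredns-66bc5c9577-7m2x4 --previous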
	
	
	==> describe nodes <==
	Name:               functional-900196
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-900196
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=functional-900196
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T14_22_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 14:22:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-900196
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 14:37:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 14:35:08 +0000   Sat, 18 Oct 2025 14:22:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 14:35:08 +0000   Sat, 18 Oct 2025 14:22:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 14:35:08 +0000   Sat, 18 Oct 2025 14:22:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 14:35:08 +0000   Sat, 18 Oct 2025 14:22:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.34
	  Hostname:    functional-900196
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 9709b49470bb44b2a2d3964a71bb675f
	  System UUID:                9709b494-70bb-44b2-a2d3-964a71bb675f
	  Boot ID:                    07efcc6d-7a9c-407c-bc19-bf481d85f1cc
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-9f59p                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     hello-node-connect-7d85dfc575-dd4gd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     mysql-5bb876957f-lc247                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    12m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-7m2x4                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     14m
	  kube-system                 etcd-functional-900196                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         14m
	  kube-system                 kube-apiserver-functional-900196              250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-functional-900196     200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-lwq2l                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-functional-900196              100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-kfk2q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-mbxqb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node functional-900196 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node functional-900196 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node functional-900196 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeReady                14m                kubelet          Node functional-900196 status is now: NodeReady
	  Normal  RegisteredNode           14m                node-controller  Node functional-900196 event: Registered Node functional-900196 in Controller
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node functional-900196 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node functional-900196 status is now: NodeHasSufficientMemory
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node functional-900196 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node functional-900196 event: Registered Node functional-900196 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node functional-900196 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node functional-900196 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node functional-900196 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node functional-900196 event: Registered Node functional-900196 in Controller
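	Sanity check on the Allocated resources block above: CPU requests sum to 600m (mysql) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m each for coredns, etcd and kube-scheduler = 1350m, i.e. 1350m of the 2000m allocatable ≈ 67%; memory requests sum to 512Mi + 70Mi + 100Mi = 682Mi ≈ 17% of the 4008596Ki allocatable, and memory limits 700Mi + 170Mi = 870Mi ≈ 22%, matching the percentages printed by kubectl describe node functional-900196.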
	
	
	==> dmesg <==
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000060] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005311] (rpcbind)[117]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.172763] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000004] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.084482] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.093384] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.140696] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.449328] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.072738] kauditd_printk_skb: 214 callbacks suppressed
	[Oct18 14:23] kauditd_printk_skb: 56 callbacks suppressed
	[  +4.565209] kauditd_printk_skb: 176 callbacks suppressed
	[ +13.742304] kauditd_printk_skb: 131 callbacks suppressed
	[  +0.110846] kauditd_printk_skb: 12 callbacks suppressed
	[  +1.037829] kauditd_printk_skb: 241 callbacks suppressed
	[Oct18 14:24] kauditd_printk_skb: 165 callbacks suppressed
	[  +4.839368] kauditd_printk_skb: 116 callbacks suppressed
	[  +1.092432] kauditd_printk_skb: 127 callbacks suppressed
	[  +0.000023] kauditd_printk_skb: 74 callbacks suppressed
	[ +25.947836] kauditd_printk_skb: 26 callbacks suppressed
	[Oct18 14:32] kauditd_printk_skb: 26 callbacks suppressed
	[  +7.150915] kauditd_printk_skb: 25 callbacks suppressed
	[Oct18 14:33] kauditd_printk_skb: 74 callbacks suppressed
	[Oct18 14:34] crun[9260]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
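	The repeated kauditd_printk_skb "callbacks suppressed" lines indicate the kernel rate-limiting bursts of audit records, and the crun memfd_create warning appears to be the advisory newer kernels emit when a caller sets neither MFD_EXEC nor MFD_NOEXEC_SEAL; neither points at a test failure. The buffer can be re-read from the guest with:
	
	$ minikube ssh -p functional-900196 -- dmesg | tail -n 30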
	
	
	==> etcd [6c0794ff6e8e5711e73c6ed64f56ecf0f6dc92706a4d204ee111f11290cf2e44] <==
	{"level":"warn","ts":"2025-10-18T14:23:19.002339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:23:19.014970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:23:19.015271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:23:19.026883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:23:19.036277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:23:19.044009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:23:19.126022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35570","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T14:23:43.632803Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-18T14:23:43.632873Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-900196","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.34:2380"],"advertise-client-urls":["https://192.168.39.34:2379"]}
	{"level":"error","ts":"2025-10-18T14:23:43.632953Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T14:23:43.719435Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T14:23:43.719543Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T14:23:43.719582Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"6c39268f2da6496d","current-leader-member-id":"6c39268f2da6496d"}
	{"level":"info","ts":"2025-10-18T14:23:43.719736Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-18T14:23:43.719767Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-18T14:23:43.719987Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T14:23:43.720033Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T14:23:43.720041Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-18T14:23:43.720078Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.34:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T14:23:43.720085Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.34:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T14:23:43.720091Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.34:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T14:23:43.723000Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.34:2380"}
	{"level":"error","ts":"2025-10-18T14:23:43.723081Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.34:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T14:23:43.723126Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.34:2380"}
	{"level":"info","ts":"2025-10-18T14:23:43.723144Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-900196","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.34:2380"],"advertise-client-urls":["https://192.168.39.34:2379"]}
	
	
	==> etcd [aedc2a498839e385f7a9db347ff30ad23bb639573ca6c9ff50a4254948df22d0] <==
	{"level":"warn","ts":"2025-10-18T14:24:03.393411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.403553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.411920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.423842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.450107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.475855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.482214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.494263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.519912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.543096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.561736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.582286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.599796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.634238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.669265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.680518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.696148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.708272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.732117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.768874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.789233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.841018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41720","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T14:34:02.646581Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":984}
	{"level":"info","ts":"2025-10-18T14:34:02.657702Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":984,"took":"10.627386ms","hash":3562947326,"current-db-size-bytes":3338240,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":3338240,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2025-10-18T14:34:02.657829Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3562947326,"revision":984,"compact-revision":-1}
	
	
	==> kernel <==
	 14:37:21 up 15 min,  0 users,  load average: 0.23, 0.26, 0.23
	Linux functional-900196 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [53113ba9ccc6d684cb2173828ed00fedd934e317a9477387680bd35747276790] <==
	I1018 14:24:04.719486       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 14:24:04.728358       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 14:24:04.729330       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 14:24:04.729605       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 14:24:04.729863       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1018 14:24:04.734364       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 14:24:04.736132       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 14:24:04.740621       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 14:24:04.751733       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 14:24:05.530077       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 14:24:06.464533       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 14:24:06.515450       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 14:24:06.545118       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 14:24:06.554419       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 14:24:08.049198       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 14:24:08.334307       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 14:24:08.437412       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 14:24:20.264564       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.161.77"}
	I1018 14:24:24.494152       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.104.89.83"}
	I1018 14:24:26.055389       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.103.130.8"}
	I1018 14:24:26.172816       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.93.197"}
	I1018 14:32:21.327579       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 14:32:21.622411       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.45.148"}
	I1018 14:32:21.643738       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.131.141"}
	I1018 14:34:04.657900       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
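	The clusterIP allocations logged above (invalid-svc, mysql, hello-node-connect, hello-node, and the two dashboard services) can be cross-checked against the live Service list:
	
	$ kubectl --context functional-900196 get svc -A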
	
	
	==> kube-controller-manager [8a1ba16847db09edd496b432d3f8beb8e87e3ad268c294da60db67bc799aad70] <==
	I1018 14:23:23.227105       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 14:23:23.227092       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 14:23:23.228466       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 14:23:23.228646       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 14:23:23.230757       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 14:23:23.230788       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 14:23:23.230794       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 14:23:23.234954       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 14:23:23.236092       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 14:23:23.237277       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 14:23:23.237366       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 14:23:23.237450       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-900196"
	I1018 14:23:23.237498       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 14:23:23.239047       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 14:23:23.252836       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 14:23:23.255972       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 14:23:23.259487       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 14:23:23.264319       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 14:23:23.271438       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 14:23:23.275470       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 14:23:23.275871       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 14:23:23.277193       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 14:23:23.279880       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 14:23:23.279907       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 14:23:23.292993       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [8dbff326b6cc8afa6d03358920f9853179471569f784102d88c64cdf4fd85912] <==
	I1018 14:24:08.082577       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 14:24:08.082769       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 14:24:08.082866       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 14:24:08.085006       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 14:24:08.086420       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 14:24:08.090627       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 14:24:08.090734       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 14:24:08.090767       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 14:24:08.091261       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 14:24:08.095383       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 14:24:08.097339       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 14:24:08.098717       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 14:24:08.098851       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 14:24:08.099335       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 14:24:08.099383       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 14:24:08.099390       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 14:24:08.099395       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 14:24:08.111890       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 14:24:08.115418       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	E1018 14:32:21.454866       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 14:32:21.460410       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 14:32:21.470427       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 14:32:21.481978       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 14:32:21.482304       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 14:32:21.500231       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
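	The serviceaccount "kubernetes-dashboard" not found errors look like an ordering race while the dashboard manifests were being applied at 14:32:21: the ReplicaSet controller retries until the ServiceAccount exists, and both dashboard pods do show up under Non-terminated Pods in the node description above. To confirm the objects eventually materialized:
	
	$ kubectl --context functional-900196 -n kubernetes-dashboard get serviceaccounts,pods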
	
	
	==> kube-proxy [08267a0026df926525df9dfe06132bd39e9bdc06eb9ee97f4286651cddabc784] <==
	I1018 14:24:05.611269       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 14:24:05.714472       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 14:24:05.714927       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.34"]
	E1018 14:24:05.716492       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 14:24:05.812046       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1018 14:24:05.812520       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1018 14:24:05.812622       1 server_linux.go:132] "Using iptables Proxier"
	I1018 14:24:05.848718       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 14:24:05.849357       1 server.go:527] "Version info" version="v1.34.1"
	I1018 14:24:05.849554       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 14:24:05.857884       1 config.go:200] "Starting service config controller"
	I1018 14:24:05.858034       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 14:24:05.858120       1 config.go:106] "Starting endpoint slice config controller"
	I1018 14:24:05.858304       1 config.go:309] "Starting node config controller"
	I1018 14:24:05.858382       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 14:24:05.858409       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 14:24:05.859956       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 14:24:05.860073       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 14:24:05.858159       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 14:24:05.959014       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 14:24:05.961218       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 14:24:05.961404       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [5ee1e13dc39acf818c45b34aab5a553b0357925c855ed6903a3974b7e38fd710] <==
	I1018 14:23:21.305079       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 14:23:21.407136       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 14:23:21.407229       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.34"]
	E1018 14:23:21.407293       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 14:23:21.487588       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1018 14:23:21.487862       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1018 14:23:21.487981       1 server_linux.go:132] "Using iptables Proxier"
	I1018 14:23:21.506483       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 14:23:21.508268       1 server.go:527] "Version info" version="v1.34.1"
	I1018 14:23:21.508285       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 14:23:21.521297       1 config.go:200] "Starting service config controller"
	I1018 14:23:21.531578       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 14:23:21.527226       1 config.go:309] "Starting node config controller"
	I1018 14:23:21.531853       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 14:23:21.531859       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 14:23:21.530548       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 14:23:21.531866       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 14:23:21.530113       1 config.go:106] "Starting endpoint slice config controller"
	I1018 14:23:21.532475       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 14:23:21.632482       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 14:23:21.632640       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 14:23:21.632695       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
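	The ip6tables "Table does not exist" error in both kube-proxy attempts is expected on this IPv4-only guest (no ip6table_nat support loaded); kube-proxy records it and, per the following line in each log, proceeds in single-stack IPv4 mode. The same exit status 3 should be reproducible directly:
	
	$ minikube ssh -p functional-900196 -- sudo ip6tables -t nat -L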
	
	
	==> kube-scheduler [b07ae915d18fc54175e6566e7805a5718e41da62fd8311a3fe672c42d5f4ba4d] <==
	I1018 14:24:04.076446       1 serving.go:386] Generated self-signed cert in-memory
	W1018 14:24:04.644569       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 14:24:04.644616       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 14:24:04.644625       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 14:24:04.644632       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 14:24:04.688693       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 14:24:04.688735       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 14:24:04.691257       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 14:24:04.691335       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 14:24:04.691507       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 14:24:04.691587       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 14:24:04.791968       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [e89892ddae1dc2d13c768357fc4a9f9f5f5676dbe163ddcf14af300adb499012] <==
	I1018 14:23:18.391537       1 serving.go:386] Generated self-signed cert in-memory
	W1018 14:23:19.734711       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 14:23:19.734910       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 14:23:19.735573       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 14:23:19.735742       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 14:23:19.836590       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 14:23:19.836721       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 14:23:19.841732       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 14:23:19.841789       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 14:23:19.842895       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 14:23:19.843086       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 14:23:19.942067       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 14:23:43.656331       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1018 14:23:43.656385       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 14:23:43.655643       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1018 14:23:43.662060       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1018 14:23:43.662165       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1018 14:23:43.662197       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 18 14:36:51 functional-900196 kubelet[5614]: E1018 14:36:51.055951    5614 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760798211055276646  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 18 14:36:51 functional-900196 kubelet[5614]: E1018 14:36:51.055972    5614 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760798211055276646  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 18 14:36:53 functional-900196 kubelet[5614]: E1018 14:36:53.744246    5614 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-lc247" podUID="cc1250e9-51ee-46d8-b2ff-fb0e49ef0d30"
	Oct 18 14:36:56 functional-900196 kubelet[5614]: E1018 14:36:56.743829    5614 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-dd4gd" podUID="2d909ba8-2bc8-448c-bf6e-e220108c425f"
	Oct 18 14:37:00 functional-900196 kubelet[5614]: E1018 14:37:00.810968    5614 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod28c9616d-7ca6-4480-bb36-f61b451a4b23/crio-58706c3ba9833de77c1199c0be5d66ba9b5d1175cad1f0aa8ca286571a930d73: Error finding container 58706c3ba9833de77c1199c0be5d66ba9b5d1175cad1f0aa8ca286571a930d73: Status 404 returned error can't find the container with id 58706c3ba9833de77c1199c0be5d66ba9b5d1175cad1f0aa8ca286571a930d73
	Oct 18 14:37:00 functional-900196 kubelet[5614]: E1018 14:37:00.811827    5614 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod1539d00838a4465e9c70da2faa0ecce0/crio-88e38926414dc77c4f11b9e11c309696d7379acb8fe1aa3716948b3c8f7f43ab: Error finding container 88e38926414dc77c4f11b9e11c309696d7379acb8fe1aa3716948b3c8f7f43ab: Status 404 returned error can't find the container with id 88e38926414dc77c4f11b9e11c309696d7379acb8fe1aa3716948b3c8f7f43ab
	Oct 18 14:37:00 functional-900196 kubelet[5614]: E1018 14:37:00.812188    5614 manager.go:1116] Failed to create existing container: /kubepods/burstable/podd219f60f-61db-4f59-beb6-f1014320fded/crio-a31dcfcaadf596f99b1b00b651e185a3a4c96ef68508ad8e4b58763486df5dd3: Error finding container a31dcfcaadf596f99b1b00b651e185a3a4c96ef68508ad8e4b58763486df5dd3: Status 404 returned error can't find the container with id a31dcfcaadf596f99b1b00b651e185a3a4c96ef68508ad8e4b58763486df5dd3
	Oct 18 14:37:00 functional-900196 kubelet[5614]: E1018 14:37:00.812740    5614 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod97ed5fabf9bf40e88932da5fec13829b/crio-50b972f1c52368f0fbc439ffc234e98462143a277880a96bd1e67be20b278229: Error finding container 50b972f1c52368f0fbc439ffc234e98462143a277880a96bd1e67be20b278229: Status 404 returned error can't find the container with id 50b972f1c52368f0fbc439ffc234e98462143a277880a96bd1e67be20b278229
	Oct 18 14:37:00 functional-900196 kubelet[5614]: E1018 14:37:00.813113    5614 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod0adc310d24a81dac60c5ad1f35e7c92b/crio-e0e939baaa67d0fd4f6816b8d93aa969b6b5bf84197f0d2445e6e3e01e191cd3: Error finding container e0e939baaa67d0fd4f6816b8d93aa969b6b5bf84197f0d2445e6e3e01e191cd3: Status 404 returned error can't find the container with id e0e939baaa67d0fd4f6816b8d93aa969b6b5bf84197f0d2445e6e3e01e191cd3
	Oct 18 14:37:00 functional-900196 kubelet[5614]: E1018 14:37:00.813738    5614 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod6ab2d89b-2ccc-43cd-874a-1c4e895df2f0/crio-d6fe8246788800a71673d766b79d51cda6360d6c9b9f1f5de1c292ab7ae27b55: Error finding container d6fe8246788800a71673d766b79d51cda6360d6c9b9f1f5de1c292ab7ae27b55: Status 404 returned error can't find the container with id d6fe8246788800a71673d766b79d51cda6360d6c9b9f1f5de1c292ab7ae27b55
	Oct 18 14:37:00 functional-900196 kubelet[5614]: E1018 14:37:00.814236    5614 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod0adc310d24a81dac60c5ad1f35e7c92b/crio-f4ca0e130b5a974969af0faccb851fe8406129db6f0728de9417aea5c09a6d81: Error finding container f4ca0e130b5a974969af0faccb851fe8406129db6f0728de9417aea5c09a6d81: Status 404 returned error can't find the container with id f4ca0e130b5a974969af0faccb851fe8406129db6f0728de9417aea5c09a6d81
	Oct 18 14:37:01 functional-900196 kubelet[5614]: E1018 14:37:01.057835    5614 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760798221057509360  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 18 14:37:01 functional-900196 kubelet[5614]: E1018 14:37:01.057882    5614 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760798221057509360  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 18 14:37:02 functional-900196 kubelet[5614]: E1018 14:37:02.921985    5614 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 18 14:37:02 functional-900196 kubelet[5614]: E1018 14:37:02.922042    5614 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 18 14:37:02 functional-900196 kubelet[5614]: E1018 14:37:02.922292    5614 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-kfk2q_kubernetes-dashboard(a429a741-948b-4a3a-b4f9-355dff740154): ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 18 14:37:02 functional-900196 kubelet[5614]: E1018 14:37:02.922335    5614 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-kfk2q" podUID="a429a741-948b-4a3a-b4f9-355dff740154"
	Oct 18 14:37:04 functional-900196 kubelet[5614]: E1018 14:37:04.745059    5614 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-lc247" podUID="cc1250e9-51ee-46d8-b2ff-fb0e49ef0d30"
	Oct 18 14:37:09 functional-900196 kubelet[5614]: E1018 14:37:09.743399    5614 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-dd4gd" podUID="2d909ba8-2bc8-448c-bf6e-e220108c425f"
	Oct 18 14:37:11 functional-900196 kubelet[5614]: E1018 14:37:11.059841    5614 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760798231059065410  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 18 14:37:11 functional-900196 kubelet[5614]: E1018 14:37:11.059863    5614 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760798231059065410  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 18 14:37:14 functional-900196 kubelet[5614]: E1018 14:37:14.746056    5614 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-kfk2q" podUID="a429a741-948b-4a3a-b4f9-355dff740154"
	Oct 18 14:37:17 functional-900196 kubelet[5614]: E1018 14:37:17.744387    5614 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-lc247" podUID="cc1250e9-51ee-46d8-b2ff-fb0e49ef0d30"
	Oct 18 14:37:21 functional-900196 kubelet[5614]: E1018 14:37:21.063788    5614 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760798241062635827  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 18 14:37:21 functional-900196 kubelet[5614]: E1018 14:37:21.063832    5614 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760798241062635827  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	
	
	==> storage-provisioner [0e1bccb3b64c5b5b17aec547dccfe3b87145e92687850c7b5f2eeb2fbecd51b8] <==
	W1018 14:36:57.177480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:36:59.180390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:36:59.189736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:01.194107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:01.200178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:03.203351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:03.208760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:05.212129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:05.218206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:07.221892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:07.227450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:09.231321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:09.239946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:11.243826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:11.253066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:13.256192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:13.261099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:15.264239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:15.270196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:17.274525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:17.279772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:19.283755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:19.293740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:21.297407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:21.306086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c5b51b4a4c799496c3145843cf20f4bb06e303ff8f4c636509258d860fa6f005] <==
	I1018 14:23:21.202377       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 14:23:21.228980       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 14:23:21.229023       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 14:23:21.238321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:23:24.693920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:23:28.954099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:23:32.552491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:23:35.607127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:23:38.630085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:23:38.643928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 14:23:38.644173       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 14:23:38.644347       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-900196_34b3e5c2-cb77-42e0-8936-58509692af6c!
	I1018 14:23:38.644293       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3e018e70-a737-4b8a-9686-e3ed69bbe860", APIVersion:"v1", ResourceVersion:"500", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-900196_34b3e5c2-cb77-42e0-8936-58509692af6c became leader
	W1018 14:23:38.652306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:23:38.659508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 14:23:38.745098       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-900196_34b3e5c2-cb77-42e0-8936-58509692af6c!
	W1018 14:23:40.662333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:23:40.670966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:23:42.675961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:23:42.683103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
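Note: the log tail above contains three recurring symptom groups. The kube-scheduler authentication-configuration warnings and the storage-provisioner "v1 Endpoints is deprecated" messages are benign noise: the provisioner still uses an Endpoints object (kube-system/k8s.io-minikube-hostpath) as its leader-election lock, so every lease renewal trips the deprecation warning, and the eviction manager's "missing image stats" errors appear to be kubelet/CRI-O stats plumbing noise rather than a test failure. The failure itself comes from the docker.io "toomanyrequests" pull errors in the kubelet log. A minimal inspection sketch, assuming kubectl on the host and crictl inside the guest (both ship with minikube):

	# Print CRI-O's image-filesystem usage, the stats the kubelet eviction manager is rejecting:
	minikube -p functional-900196 ssh -- sudo crictl imagefsinfo
	# Show the Endpoints-based leader-election lock behind the deprecation warnings:
	kubectl --context functional-900196 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml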
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-900196 -n functional-900196
helpers_test.go:269: (dbg) Run:  kubectl --context functional-900196 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-9f59p hello-node-connect-7d85dfc575-dd4gd mysql-5bb876957f-lc247 sp-pod dashboard-metrics-scraper-77bf4d6c4c-kfk2q kubernetes-dashboard-855c9754f9-mbxqb
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-900196 describe pod busybox-mount hello-node-75c85bcc94-9f59p hello-node-connect-7d85dfc575-dd4gd mysql-5bb876957f-lc247 sp-pod dashboard-metrics-scraper-77bf4d6c4c-kfk2q kubernetes-dashboard-855c9754f9-mbxqb
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-900196 describe pod busybox-mount hello-node-75c85bcc94-9f59p hello-node-connect-7d85dfc575-dd4gd mysql-5bb876957f-lc247 sp-pod dashboard-metrics-scraper-77bf4d6c4c-kfk2q kubernetes-dashboard-855c9754f9-mbxqb: exit status 1 (120.116529ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-900196/192.168.39.34
	Start Time:       Sat, 18 Oct 2025 14:30:38 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  cri-o://3a6150c71b2aba5471783b062a7b940e5d8823a4ffdc974bc8cbcafef4b47a8c
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 18 Oct 2025 14:32:12 +0000
	      Finished:     Sat, 18 Oct 2025 14:32:13 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hrltn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-hrltn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  6m44s  default-scheduler  Successfully assigned default/busybox-mount to functional-900196
	  Normal  Pulling    6m44s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m10s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.267s (1m34.172s including waiting). Image size: 4631262 bytes.
	  Normal  Created    5m10s  kubelet            Created container: mount-munger
	  Normal  Started    5m10s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-9f59p
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-900196/192.168.39.34
	Start Time:       Sat, 18 Oct 2025 14:24:26 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j5bzj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-j5bzj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  12m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-9f59p to functional-900196
	  Warning  Failed     8m47s                 kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m8s (x3 over 11m)    kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m8s (x4 over 11m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    2m50s (x11 over 11m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     2m50s (x11 over 11m)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    2m37s (x5 over 12m)   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-dd4gd
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-900196/192.168.39.34
	Start Time:       Sat, 18 Oct 2025 14:24:25 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hp9w4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hp9w4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  12m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-dd4gd to functional-900196
	  Warning  Failed     6m45s                kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m19s (x5 over 12m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     50s (x4 over 11m)    kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     50s (x5 over 11m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    0s (x13 over 11m)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     0s (x13 over 11m)    kubelet            Error: ImagePullBackOff
	
	
	Name:             mysql-5bb876957f-lc247
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-900196/192.168.39.34
	Start Time:       Sat, 18 Oct 2025 14:24:24 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-796d9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-796d9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  12m                    default-scheduler  Successfully assigned default/mysql-5bb876957f-lc247 to functional-900196
	  Warning  Failed     12m                    kubelet            Failed to pull image "docker.io/mysql:5.7": copying system image from manifest list: determining manifest MIME type for docker://mysql:5.7: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m46s (x2 over 9m51s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m40s (x5 over 12m)    kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     81s (x5 over 12m)      kubelet            Error: ErrImagePull
	  Warning  Failed     81s (x2 over 5m11s)    kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    5s (x16 over 12m)      kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     5s (x16 over 12m)      kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-900196/192.168.39.34
	Start Time:       Sat, 18 Oct 2025 14:24:32 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hr5x7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-hr5x7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  12m                    default-scheduler  Successfully assigned default/sp-pod to functional-900196
	  Warning  Failed     10m                    kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m22s (x4 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     2m22s (x3 over 8m17s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    63s (x11 over 10m)     kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     63s (x11 over 10m)     kubelet            Error: ImagePullBackOff
	  Normal   Pulling    51s (x5 over 12m)      kubelet            Pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-kfk2q" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-mbxqb" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-900196 describe pod busybox-mount hello-node-75c85bcc94-9f59p hello-node-connect-7d85dfc575-dd4gd mysql-5bb876957f-lc247 sp-pod dashboard-metrics-scraper-77bf4d6c4c-kfk2q kubernetes-dashboard-855c9754f9-mbxqb: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.47s)
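The pods stuck in Pending above are all blocked on the same root cause: unauthenticated image pulls from docker.io are rejected with "toomanyrequests" (Docker Hub's pull rate limit), so every container sits in ImagePullBackOff until the test's wait deadline expires. A minimal mitigation sketch, not part of the test run; the secret name "regcred" and the DOCKER_USER/DOCKER_PAT variables are illustrative placeholders:

	# Option 1: authenticate pulls by attaching an imagePullSecret to the default ServiceAccount.
	kubectl --context functional-900196 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username="$DOCKER_USER" --docker-password="$DOCKER_PAT"
	kubectl --context functional-900196 patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "regcred"}]}'
	# Option 2: side-load an image into the node's CRI-O store so the kubelet never contacts the registry.
	minikube -p functional-900196 image load kicbase/echo-server:latest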

TestFunctional/parallel/ServiceCmdConnect (603.6s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-900196 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-900196 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-dd4gd" [2d909ba8-2bc8-448c-bf6e-e220108c425f] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-900196 -n functional-900196
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-18 14:34:26.402782542 +0000 UTC m=+1560.138139570
functional_test.go:1645: (dbg) Run:  kubectl --context functional-900196 describe po hello-node-connect-7d85dfc575-dd4gd -n default
functional_test.go:1645: (dbg) kubectl --context functional-900196 describe po hello-node-connect-7d85dfc575-dd4gd -n default:
Name:             hello-node-connect-7d85dfc575-dd4gd
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-900196/192.168.39.34
Start Time:       Sat, 18 Oct 2025 14:24:25 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hp9w4 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-hp9w4:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-dd4gd to functional-900196
  Warning  Failed     3m49s                 kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     103s (x3 over 8m57s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     103s (x4 over 8m57s)  kubelet            Error: ErrImagePull
  Normal   BackOff    35s (x9 over 8m57s)   kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     35s (x9 over 8m57s)   kubelet            Error: ImagePullBackOff
  Normal   Pulling    23s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-900196 logs hello-node-connect-7d85dfc575-dd4gd -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-900196 logs hello-node-connect-7d85dfc575-dd4gd -n default: exit status 1 (94.213102ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-dd4gd" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-900196 logs hello-node-connect-7d85dfc575-dd4gd -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-900196 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-dd4gd
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-900196/192.168.39.34
Start Time:       Sat, 18 Oct 2025 14:24:25 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hp9w4 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-hp9w4:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-dd4gd to functional-900196
  Warning  Failed     3m49s                 kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     103s (x3 over 8m57s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     103s (x4 over 8m57s)  kubelet            Error: ErrImagePull
  Normal   BackOff    35s (x9 over 8m57s)   kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     35s (x9 over 8m57s)   kubelet            Error: ImagePullBackOff
  Normal   Pulling    23s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"

functional_test.go:1618: (dbg) Run:  kubectl --context functional-900196 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-900196 logs -l app=hello-node-connect: exit status 1 (90.510633ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-dd4gd" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-900196 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-900196 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.103.130.8
IPs:                      10.103.130.8
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30664/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
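The empty Endpoints: field above is the direct cause of the connection failure: the service's selector matches the pod, but the pod never becomes Ready (its only container is stuck in ImagePullBackOff), so no backend address is ever published and the NodePort has nothing to forward to. A quick confirmation sketch, not part of the test run:

	# No ready endpoints here means the service cannot route any traffic.
	kubectl --context functional-900196 get endpointslices \
	  -l kubernetes.io/service-name=hello-node-connect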
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-900196 -n functional-900196
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-900196 logs -n 25: (1.780334982s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                ARGS                                                                 │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-900196 /tmp/TestFunctionalparallelMountCmdany-port1051280194/001:/mount-9p --alsologtostderr -v=1                     │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:30 UTC │                     │
	│ ssh       │ functional-900196 ssh findmnt -T /mount-9p | grep 9p                                                                                │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:30 UTC │ 18 Oct 25 14:30 UTC │
	│ ssh       │ functional-900196 ssh -- ls -la /mount-9p                                                                                           │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:30 UTC │ 18 Oct 25 14:30 UTC │
	│ ssh       │ functional-900196 ssh cat /mount-9p/test-1760797836634853090                                                                        │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:30 UTC │ 18 Oct 25 14:30 UTC │
	│ ssh       │ functional-900196 ssh stat /mount-9p/created-by-test                                                                                │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │ 18 Oct 25 14:32 UTC │
	│ ssh       │ functional-900196 ssh stat /mount-9p/created-by-pod                                                                                 │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │ 18 Oct 25 14:32 UTC │
	│ ssh       │ functional-900196 ssh sudo umount -f /mount-9p                                                                                      │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │ 18 Oct 25 14:32 UTC │
	│ ssh       │ functional-900196 ssh findmnt -T /mount-9p | grep 9p                                                                                │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │                     │
	│ mount     │ -p functional-900196 /tmp/TestFunctionalparallelMountCmdspecific-port3223273432/001:/mount-9p --alsologtostderr -v=1 --port 46464   │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │                     │
	│ ssh       │ functional-900196 ssh findmnt -T /mount-9p | grep 9p                                                                                │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │ 18 Oct 25 14:32 UTC │
	│ ssh       │ functional-900196 ssh -- ls -la /mount-9p                                                                                           │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │ 18 Oct 25 14:32 UTC │
	│ ssh       │ functional-900196 ssh sudo umount -f /mount-9p                                                                                      │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │                     │
	│ mount     │ -p functional-900196 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3852977713/001:/mount2 --alsologtostderr -v=1                  │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │                     │
	│ mount     │ -p functional-900196 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3852977713/001:/mount3 --alsologtostderr -v=1                  │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │                     │
	│ ssh       │ functional-900196 ssh findmnt -T /mount1                                                                                            │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │                     │
	│ mount     │ -p functional-900196 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3852977713/001:/mount1 --alsologtostderr -v=1                  │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │                     │
	│ ssh       │ functional-900196 ssh findmnt -T /mount1                                                                                            │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │ 18 Oct 25 14:32 UTC │
	│ ssh       │ functional-900196 ssh findmnt -T /mount2                                                                                            │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │ 18 Oct 25 14:32 UTC │
	│ ssh       │ functional-900196 ssh findmnt -T /mount3                                                                                            │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │ 18 Oct 25 14:32 UTC │
	│ mount     │ -p functional-900196 --kill=true                                                                                                    │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │                     │
	│ start     │ -p functional-900196 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │                     │
	│ start     │ -p functional-900196 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │                     │
	│ start     │ -p functional-900196 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false           │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-900196 --alsologtostderr -v=1                                                                      │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │                     │
	│ service   │ functional-900196 service list                                                                                                      │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 14:32:20
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 14:32:20.260818 1770878 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:32:20.261074 1770878 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:32:20.261085 1770878 out.go:374] Setting ErrFile to fd 2...
	I1018 14:32:20.261090 1770878 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:32:20.261277 1770878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1755824/.minikube/bin
	I1018 14:32:20.261753 1770878 out.go:368] Setting JSON to false
	I1018 14:32:20.262755 1770878 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":22488,"bootTime":1760775452,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 14:32:20.262872 1770878 start.go:141] virtualization: kvm guest
	I1018 14:32:20.264871 1770878 out.go:179] * [functional-900196] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 14:32:20.266558 1770878 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 14:32:20.266576 1770878 notify.go:220] Checking for updates...
	I1018 14:32:20.268996 1770878 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 14:32:20.270583 1770878 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1755824/kubeconfig
	I1018 14:32:20.271947 1770878 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1755824/.minikube
	I1018 14:32:20.276153 1770878 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 14:32:20.277526 1770878 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 14:32:20.279077 1770878 config.go:182] Loaded profile config "functional-900196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:32:20.279476 1770878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:32:20.279547 1770878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:32:20.293619 1770878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40129
	I1018 14:32:20.294123 1770878 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:32:20.294734 1770878 main.go:141] libmachine: Using API Version  1
	I1018 14:32:20.294763 1770878 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:32:20.295134 1770878 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:32:20.295334 1770878 main.go:141] libmachine: (functional-900196) Calling .DriverName
	I1018 14:32:20.295663 1770878 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 14:32:20.296029 1770878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:32:20.296083 1770878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:32:20.310256 1770878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36629
	I1018 14:32:20.310819 1770878 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:32:20.311405 1770878 main.go:141] libmachine: Using API Version  1
	I1018 14:32:20.311440 1770878 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:32:20.311890 1770878 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:32:20.312119 1770878 main.go:141] libmachine: (functional-900196) Calling .DriverName
	I1018 14:32:20.344597 1770878 out.go:179] * Using the kvm2 driver based on existing profile
	I1018 14:32:20.345696 1770878 start.go:305] selected driver: kvm2
	I1018 14:32:20.345710 1770878 start.go:925] validating driver "kvm2" against &{Name:functional-900196 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-900196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.34 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:32:20.345818 1770878 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 14:32:20.346798 1770878 cni.go:84] Creating CNI manager for ""
	I1018 14:32:20.346852 1770878 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 14:32:20.346901 1770878 start.go:349] cluster config:
	{Name:functional-900196 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-900196 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.34 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:32:20.348722 1770878 out.go:179] * dry-run validation complete!
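The --dry-run start traced above stops after config and driver validation without (re)provisioning the VM, which makes it useful for checking flag combinations against an existing profile. A hedged reproduction of the logged invocation (binary path per the MINIKUBE_BIN setting above):

	out/minikube-linux-amd64 start -p functional-900196 --dry-run --alsologtostderr -v=1 \
	  --driver=kvm2 --container-runtime=crio --auto-update-drivers=false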
	
	
	==> CRI-O <==
	Oct 18 14:34:27 functional-900196 crio[5303]: time="2025-10-18 14:34:27.739742860Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760798067739644105,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:167805,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4930f8eb-25dd-4161-83c0-548b638cf9dd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:34:27 functional-900196 crio[5303]: time="2025-10-18 14:34:27.740937767Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6c3783e-56b2-41f6-971b-0a131372d351 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:34:27 functional-900196 crio[5303]: time="2025-10-18 14:34:27.741016575Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6c3783e-56b2-41f6-971b-0a131372d351 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:34:27 functional-900196 crio[5303]: time="2025-10-18 14:34:27.741318331Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3a6150c71b2aba5471783b062a7b940e5d8823a4ffdc974bc8cbcafef4b47a8c,PodSandboxId:462f63cd1cd9b507f8b65ae177f769b32917291abbec415412e8cf1d2f7bad32,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1760797932903872088,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b1c21ed2-b86c-4e19-a613-f6d67149156e,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ef6272e347114f48d4fe3e59f62f8fbd9d6fe65a3c2376c1e41119952c7a330,PodSandboxId:6c396b3a6d33f5432556b5422742fa4bc0bfd9450fb4a32311d54c98d5a37d0b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760797445461154411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7m2x4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d219f60f-61db-4f59-beb6-f1014320fded,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08267a0026df926525df9dfe06132bd39e9bdc06eb9ee97f4286651cddabc784,PodSandboxId:643097cfed919a042fe18fd1a3ba3decb51ffd3e08a2460e6bc52f5766ac082e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760797445279809730,Labels:map[strin
g]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwq2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28c9616d-7ca6-4480-bb36-f61b451a4b23,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e1bccb3b64c5b5b17aec547dccfe3b87145e92687850c7b5f2eeb2fbecd51b8,PodSandboxId:1915389128f2ce0a6550ee48b07913f69022b98604f123ff6e8a8e1b36273e6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760797445247219494,Labels:map[string]string{io.kubernetes.c
ontainer.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab2d89b-2ccc-43cd-874a-1c4e895df2f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dbff326b6cc8afa6d03358920f9853179471569f784102d88c64cdf4fd85912,PodSandboxId:d89012bc5dd1a9e67f0d93b8983b794c98bf6b83893054547c7aba1c7a22b45c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760797441534367275,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1539d00838a4465e9c70da2faa0ecce0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aedc2a498839e385f7a9db347ff30ad23bb639573ca6c9ff50a4254948df22d0,PodSandboxId:3f0d73a3a97301f6a016cca5df90761dc2bdf2226ad014d89451471e0e456d0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb
5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760797441504384963,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97ed5fabf9bf40e88932da5fec13829b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b07ae915d18fc54175e6566e7805a5718e41da62fd8311a3fe672c42d5f4ba4d,PodSandboxId:5dd0badd5d559e2de9a32427d0e5bf6d28cf72338ea93be9001384c7b210ff8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760797441484574039,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0adc310d24a81dac60c5ad1f35e7c92b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53113ba9ccc6d684cb2173828ed00fedd934e317a9477387680bd35747276790,PodSandboxId:a35ae590fec1283d5d898322419c35e8a914d929214811cb84f0e5b076fbbac0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{I
mage:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760797441418137967,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db61fe1a843b72c5a01c6ce296ea0908,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88dbdf96d71bbe880891ce43151faca2a406ca0a6bc43163813a482e8e7b4b10,PodSandboxId:a31dcfcaadf596f99b1b00b651e185a3a4c
96ef68508ad8e4b58763486df5dd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760797401310576970,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7m2x4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d219f60f-61db-4f59-beb6-f1014320fded,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}
],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5b51b4a4c799496c3145843cf20f4bb06e303ff8f4c636509258d860fa6f005,PodSandboxId:d6fe8246788800a71673d766b79d51cda6360d6c9b9f1f5de1c292ab7ae27b55,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760797400979371781,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab2d89b-2ccc-43cd-874a-1c4e895df2f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ee1e13dc39acf818c45b34aab5a553b0357925c855ed6903a3974b7e38fd710,PodSandboxId:58706c3ba9833de77c1199c0be5d66ba9b5d1175cad1f0aa8ca286571a930d73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760797400908583885,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwq2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28c9616d-7ca6-4480-bb36-f61b451a4b23,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0794ff6e8e5711e73c6ed64f56ecf0f6dc92706a4d204ee111f11290cf2e44,PodSandboxId:50b972f1c52368f0fbc439ffc234e98462143a277880a96bd1e67be20b278229,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760797397092229596,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97ed5fabf9bf40e88932da5fec13829b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPor
t\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e89892ddae1dc2d13c768357fc4a9f9f5f5676dbe163ddcf14af300adb499012,PodSandboxId:e0e939baaa67d0fd4f6816b8d93aa969b6b5bf84197f0d2445e6e3e01e191cd3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760797397121488699,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0adc310d24a81dac60c5ad1f35e7c92b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1ba16847db09edd496b432d3f8beb8e87e3ad268c294da60db67bc799aad70,PodSandboxId:88e38926414dc77c4f11b9e11c309696d7379acb8fe1aa3716948b3c8f7f43ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760797397082298798,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-900196,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 1539d00838a4465e9c70da2faa0ecce0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b6c3783e-56b2-41f6-971b-0a131372d351 name=/runtime.v1.RuntimeService/ListContainers
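The debug entries in this section are periodic CRI polls (Version, ImageFsInfo, ListContainers) from the kubelet and related tooling hitting CRI-O over its gRPC socket. The same data can be fetched ad hoc from inside the node with crictl; a sketch, assuming crictl is on the node's PATH as in standard minikube images:

	minikube -p functional-900196 ssh "sudo crictl version"      # RuntimeService/Version
	minikube -p functional-900196 ssh "sudo crictl imagefsinfo"  # ImageService/ImageFsInfo
	minikube -p functional-900196 ssh "sudo crictl ps -a"        # RuntimeService/ListContainers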
	Oct 18 14:34:27 functional-900196 crio[5303]: time="2025-10-18 14:34:27.818884471Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=abc5c2e9-3a05-43f6-8eec-c88457e94254 name=/runtime.v1.RuntimeService/Version
	Oct 18 14:34:27 functional-900196 crio[5303]: time="2025-10-18 14:34:27.818992255Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=abc5c2e9-3a05-43f6-8eec-c88457e94254 name=/runtime.v1.RuntimeService/Version
	Oct 18 14:34:27 functional-900196 crio[5303]: time="2025-10-18 14:34:27.821274792Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d77348f1-d19f-400e-aa3c-b551960e3f89 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:34:27 functional-900196 crio[5303]: time="2025-10-18 14:34:27.823105378Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760798067823034716,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:167805,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d77348f1-d19f-400e-aa3c-b551960e3f89 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:34:27 functional-900196 crio[5303]: time="2025-10-18 14:34:27.825453612Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=02546a56-6a9a-4e4f-b397-c1f4e61217aa name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:34:27 functional-900196 crio[5303]: time="2025-10-18 14:34:27.826058412Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=02546a56-6a9a-4e4f-b397-c1f4e61217aa name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:34:27 functional-900196 crio[5303]: time="2025-10-18 14:34:27.827271027Z" level=debug msg="Response: &ListContainersResponse{...}" file="otel-collector/interceptors.go:74" id=02546a56-6a9a-4e4f-b397-c1f4e61217aa name=/runtime.v1.RuntimeService/ListContainers (payload identical to the 14:34:27.741318331Z ListContainersResponse above)
	Oct 18 14:34:27 functional-900196 crio[5303]: time="2025-10-18 14:34:27.874514140Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4379d383-f34b-424e-9ce5-d822191f2eec name=/runtime.v1.RuntimeService/Version
	Oct 18 14:34:27 functional-900196 crio[5303]: time="2025-10-18 14:34:27.874777518Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4379d383-f34b-424e-9ce5-d822191f2eec name=/runtime.v1.RuntimeService/Version
	Oct 18 14:34:27 functional-900196 crio[5303]: time="2025-10-18 14:34:27.877190097Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1015321f-669c-410f-9369-080445b22021 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:34:27 functional-900196 crio[5303]: time="2025-10-18 14:34:27.878637365Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760798067878556425,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:167805,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1015321f-669c-410f-9369-080445b22021 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:34:27 functional-900196 crio[5303]: time="2025-10-18 14:34:27.879498250Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f5255c6b-044a-4d28-b02c-efed7e144066 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:34:27 functional-900196 crio[5303]: time="2025-10-18 14:34:27.879604052Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f5255c6b-044a-4d28-b02c-efed7e144066 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:34:27 functional-900196 crio[5303]: time="2025-10-18 14:34:27.879947846Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3a6150c71b2aba5471783b062a7b940e5d8823a4ffdc974bc8cbcafef4b47a8c,PodSandboxId:462f63cd1cd9b507f8b65ae177f769b32917291abbec415412e8cf1d2f7bad32,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1760797932903872088,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b1c21ed2-b86c-4e19-a613-f6d67149156e,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ef6272e347114f48d4fe3e59f62f8fbd9d6fe65a3c2376c1e41119952c7a330,PodSandboxId:6c396b3a6d33f5432556b5422742fa4bc0bfd9450fb4a32311d54c98d5a37d0b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760797445461154411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7m2x4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d219f60f-61db-4f59-beb6-f1014320fded,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08267a0026df926525df9dfe06132bd39e9bdc06eb9ee97f4286651cddabc784,PodSandboxId:643097cfed919a042fe18fd1a3ba3decb51ffd3e08a2460e6bc52f5766ac082e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760797445279809730,Labels:map[strin
g]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwq2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28c9616d-7ca6-4480-bb36-f61b451a4b23,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e1bccb3b64c5b5b17aec547dccfe3b87145e92687850c7b5f2eeb2fbecd51b8,PodSandboxId:1915389128f2ce0a6550ee48b07913f69022b98604f123ff6e8a8e1b36273e6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760797445247219494,Labels:map[string]string{io.kubernetes.c
ontainer.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab2d89b-2ccc-43cd-874a-1c4e895df2f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dbff326b6cc8afa6d03358920f9853179471569f784102d88c64cdf4fd85912,PodSandboxId:d89012bc5dd1a9e67f0d93b8983b794c98bf6b83893054547c7aba1c7a22b45c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760797441534367275,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1539d00838a4465e9c70da2faa0ecce0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aedc2a498839e385f7a9db347ff30ad23bb639573ca6c9ff50a4254948df22d0,PodSandboxId:3f0d73a3a97301f6a016cca5df90761dc2bdf2226ad014d89451471e0e456d0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb
5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760797441504384963,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97ed5fabf9bf40e88932da5fec13829b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b07ae915d18fc54175e6566e7805a5718e41da62fd8311a3fe672c42d5f4ba4d,PodSandboxId:5dd0badd5d559e2de9a32427d0e5bf6d28cf72338ea93be9001384c7b210ff8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760797441484574039,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0adc310d24a81dac60c5ad1f35e7c92b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53113ba9ccc6d684cb2173828ed00fedd934e317a9477387680bd35747276790,PodSandboxId:a35ae590fec1283d5d898322419c35e8a914d929214811cb84f0e5b076fbbac0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{I
mage:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760797441418137967,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db61fe1a843b72c5a01c6ce296ea0908,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88dbdf96d71bbe880891ce43151faca2a406ca0a6bc43163813a482e8e7b4b10,PodSandboxId:a31dcfcaadf596f99b1b00b651e185a3a4c
96ef68508ad8e4b58763486df5dd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760797401310576970,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7m2x4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d219f60f-61db-4f59-beb6-f1014320fded,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}
],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5b51b4a4c799496c3145843cf20f4bb06e303ff8f4c636509258d860fa6f005,PodSandboxId:d6fe8246788800a71673d766b79d51cda6360d6c9b9f1f5de1c292ab7ae27b55,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760797400979371781,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab2d89b-2ccc-43cd-874a-1c4e895df2f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ee1e13dc39acf818c45b34aab5a553b0357925c855ed6903a3974b7e38fd710,PodSandboxId:58706c3ba9833de77c1199c0be5d66ba9b5d1175cad1f0aa8ca286571a930d73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760797400908583885,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwq2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28c9616d-7ca6-4480-bb36-f61b451a4b23,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0794ff6e8e5711e73c6ed64f56ecf0f6dc92706a4d204ee111f11290cf2e44,PodSandboxId:50b972f1c52368f0fbc439ffc234e98462143a277880a96bd1e67be20b278229,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760797397092229596,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97ed5fabf9bf40e88932da5fec13829b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPor
t\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e89892ddae1dc2d13c768357fc4a9f9f5f5676dbe163ddcf14af300adb499012,PodSandboxId:e0e939baaa67d0fd4f6816b8d93aa969b6b5bf84197f0d2445e6e3e01e191cd3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760797397121488699,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0adc310d24a81dac60c5ad1f35e7c92b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1ba16847db09edd496b432d3f8beb8e87e3ad268c294da60db67bc799aad70,PodSandboxId:88e38926414dc77c4f11b9e11c309696d7379acb8fe1aa3716948b3c8f7f43ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760797397082298798,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-900196,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 1539d00838a4465e9c70da2faa0ecce0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f5255c6b-044a-4d28-b02c-efed7e144066 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:34:27 functional-900196 crio[5303]: time="2025-10-18 14:34:27.930955857Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=26aa8dc5-b890-4769-bae0-563691299744 name=/runtime.v1.RuntimeService/Version
	Oct 18 14:34:27 functional-900196 crio[5303]: time="2025-10-18 14:34:27.931310714Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=26aa8dc5-b890-4769-bae0-563691299744 name=/runtime.v1.RuntimeService/Version
	Oct 18 14:34:27 functional-900196 crio[5303]: time="2025-10-18 14:34:27.933113618Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f55c7ab3-fefd-450a-bfde-6df4d4ae87ba name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:34:27 functional-900196 crio[5303]: time="2025-10-18 14:34:27.934294716Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760798067934266636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:167805,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f55c7ab3-fefd-450a-bfde-6df4d4ae87ba name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:34:27 functional-900196 crio[5303]: time="2025-10-18 14:34:27.935112544Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d37a4835-4c6e-44b5-b054-84e2d6da8175 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:34:27 functional-900196 crio[5303]: time="2025-10-18 14:34:27.935190969Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d37a4835-4c6e-44b5-b054-84e2d6da8175 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:34:27 functional-900196 crio[5303]: time="2025-10-18 14:34:27.935457924Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3a6150c71b2aba5471783b062a7b940e5d8823a4ffdc974bc8cbcafef4b47a8c,PodSandboxId:462f63cd1cd9b507f8b65ae177f769b32917291abbec415412e8cf1d2f7bad32,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1760797932903872088,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b1c21ed2-b86c-4e19-a613-f6d67149156e,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ef6272e347114f48d4fe3e59f62f8fbd9d6fe65a3c2376c1e41119952c7a330,PodSandboxId:6c396b3a6d33f5432556b5422742fa4bc0bfd9450fb4a32311d54c98d5a37d0b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760797445461154411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7m2x4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d219f60f-61db-4f59-beb6-f1014320fded,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08267a0026df926525df9dfe06132bd39e9bdc06eb9ee97f4286651cddabc784,PodSandboxId:643097cfed919a042fe18fd1a3ba3decb51ffd3e08a2460e6bc52f5766ac082e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760797445279809730,Labels:map[strin
g]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwq2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28c9616d-7ca6-4480-bb36-f61b451a4b23,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e1bccb3b64c5b5b17aec547dccfe3b87145e92687850c7b5f2eeb2fbecd51b8,PodSandboxId:1915389128f2ce0a6550ee48b07913f69022b98604f123ff6e8a8e1b36273e6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760797445247219494,Labels:map[string]string{io.kubernetes.c
ontainer.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab2d89b-2ccc-43cd-874a-1c4e895df2f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dbff326b6cc8afa6d03358920f9853179471569f784102d88c64cdf4fd85912,PodSandboxId:d89012bc5dd1a9e67f0d93b8983b794c98bf6b83893054547c7aba1c7a22b45c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760797441534367275,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1539d00838a4465e9c70da2faa0ecce0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aedc2a498839e385f7a9db347ff30ad23bb639573ca6c9ff50a4254948df22d0,PodSandboxId:3f0d73a3a97301f6a016cca5df90761dc2bdf2226ad014d89451471e0e456d0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb
5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760797441504384963,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97ed5fabf9bf40e88932da5fec13829b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b07ae915d18fc54175e6566e7805a5718e41da62fd8311a3fe672c42d5f4ba4d,PodSandboxId:5dd0badd5d559e2de9a32427d0e5bf6d28cf72338ea93be9001384c7b210ff8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760797441484574039,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0adc310d24a81dac60c5ad1f35e7c92b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53113ba9ccc6d684cb2173828ed00fedd934e317a9477387680bd35747276790,PodSandboxId:a35ae590fec1283d5d898322419c35e8a914d929214811cb84f0e5b076fbbac0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{I
mage:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760797441418137967,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db61fe1a843b72c5a01c6ce296ea0908,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88dbdf96d71bbe880891ce43151faca2a406ca0a6bc43163813a482e8e7b4b10,PodSandboxId:a31dcfcaadf596f99b1b00b651e185a3a4c
96ef68508ad8e4b58763486df5dd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760797401310576970,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7m2x4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d219f60f-61db-4f59-beb6-f1014320fded,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}
],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5b51b4a4c799496c3145843cf20f4bb06e303ff8f4c636509258d860fa6f005,PodSandboxId:d6fe8246788800a71673d766b79d51cda6360d6c9b9f1f5de1c292ab7ae27b55,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760797400979371781,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab2d89b-2ccc-43cd-874a-1c4e895df2f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ee1e13dc39acf818c45b34aab5a553b0357925c855ed6903a3974b7e38fd710,PodSandboxId:58706c3ba9833de77c1199c0be5d66ba9b5d1175cad1f0aa8ca286571a930d73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760797400908583885,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwq2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28c9616d-7ca6-4480-bb36-f61b451a4b23,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0794ff6e8e5711e73c6ed64f56ecf0f6dc92706a4d204ee111f11290cf2e44,PodSandboxId:50b972f1c52368f0fbc439ffc234e98462143a277880a96bd1e67be20b278229,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760797397092229596,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97ed5fabf9bf40e88932da5fec13829b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPor
t\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e89892ddae1dc2d13c768357fc4a9f9f5f5676dbe163ddcf14af300adb499012,PodSandboxId:e0e939baaa67d0fd4f6816b8d93aa969b6b5bf84197f0d2445e6e3e01e191cd3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760797397121488699,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0adc310d24a81dac60c5ad1f35e7c92b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1ba16847db09edd496b432d3f8beb8e87e3ad268c294da60db67bc799aad70,PodSandboxId:88e38926414dc77c4f11b9e11c309696d7379acb8fe1aa3716948b3c8f7f43ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760797397082298798,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-900196,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 1539d00838a4465e9c70da2faa0ecce0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d37a4835-4c6e-44b5-b054-84e2d6da8175 name=/runtime.v1.RuntimeService/ListContainers
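
The interleaved Version/ImageFsInfo/ListContainers entries above are the kubelet's routine CRI polling of CRI-O; the empty ContainerFilter in the request is why CRI-O logs "No filters were applied, returning full container list". The same three RPCs can be replayed by hand from inside the node with crictl (a sketch, assuming CRI-O's default socket path):

  $ out/minikube-linux-amd64 -p functional-900196 ssh
  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version        # RuntimeService/Version
  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo    # ImageService/ImageFsInfo
  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a          # RuntimeService/ListContainers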
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3a6150c71b2ab       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   2 minutes ago       Exited              mount-munger              0                   462f63cd1cd9b       busybox-mount
	8ef6272e34711       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      10 minutes ago      Running             coredns                   2                   6c396b3a6d33f       coredns-66bc5c9577-7m2x4
	08267a0026df9       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      10 minutes ago      Running             kube-proxy                2                   643097cfed919       kube-proxy-lwq2l
	0e1bccb3b64c5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Running             storage-provisioner       2                   1915389128f2c       storage-provisioner
	8dbff326b6cc8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      10 minutes ago      Running             kube-controller-manager   2                   d89012bc5dd1a       kube-controller-manager-functional-900196
	aedc2a498839e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      10 minutes ago      Running             etcd                      2                   3f0d73a3a9730       etcd-functional-900196
	b07ae915d18fc       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      10 minutes ago      Running             kube-scheduler            2                   5dd0badd5d559       kube-scheduler-functional-900196
	53113ba9ccc6d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      10 minutes ago      Running             kube-apiserver            0                   a35ae590fec12       kube-apiserver-functional-900196
	88dbdf96d71bb       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 minutes ago      Exited              coredns                   1                   a31dcfcaadf59       coredns-66bc5c9577-7m2x4
	c5b51b4a4c799       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 minutes ago      Exited              storage-provisioner       1                   d6fe824678880       storage-provisioner
	5ee1e13dc39ac       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      11 minutes ago      Exited              kube-proxy                1                   58706c3ba9833       kube-proxy-lwq2l
	e89892ddae1dc       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      11 minutes ago      Exited              kube-scheduler            1                   e0e939baaa67d       kube-scheduler-functional-900196
	6c0794ff6e8e5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      11 minutes ago      Exited              etcd                      1                   50b972f1c5236       etcd-functional-900196
	8a1ba16847db0       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      11 minutes ago      Exited              kube-controller-manager   1                   88e38926414dc       kube-controller-manager-functional-900196
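
The ATTEMPT column is the io.kubernetes.container.restartCount annotation from the ListContainers payloads above, so the Exited attempt-1 rows and Running attempt-2 rows describe the same workloads across a restart. To isolate one workload from a listing like this, crictl accepts name/state filters (a hypothetical example):

  $ sudo crictl ps -a --name kube-scheduler --state exited -o json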
	
	
	==> coredns [88dbdf96d71bbe880891ce43151faca2a406ca0a6bc43163813a482e8e7b4b10] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42586 - 65064 "HINFO IN 7206085342544834509.5779663432164893704. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.097798211s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8ef6272e347114f48d4fe3e59f62f8fbd9d6fe65a3c2376c1e41119952c7a330] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44491 - 9280 "HINFO IN 4407530105380212382.4237632423946435234. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.081412794s
	
	
	==> describe nodes <==
	Name:               functional-900196
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-900196
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=functional-900196
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T14_22_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 14:22:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-900196
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 14:34:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 14:32:34 +0000   Sat, 18 Oct 2025 14:22:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 14:32:34 +0000   Sat, 18 Oct 2025 14:22:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 14:32:34 +0000   Sat, 18 Oct 2025 14:22:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 14:32:34 +0000   Sat, 18 Oct 2025 14:22:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.34
	  Hostname:    functional-900196
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 9709b49470bb44b2a2d3964a71bb675f
	  System UUID:                9709b494-70bb-44b2-a2d3-964a71bb675f
	  Boot ID:                    07efcc6d-7a9c-407c-bc19-bf481d85f1cc
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-9f59p                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-dd4gd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-lc247                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
	  kube-system                 coredns-66bc5c9577-7m2x4                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     11m
	  kube-system                 etcd-functional-900196                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         11m
	  kube-system                 kube-apiserver-functional-900196              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-900196     200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-lwq2l                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-900196              100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-kfk2q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-mbxqb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
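
The percentages in this table are the sums of the per-pod requests/limits above divided by the node's Allocatable figures (a worked check, taking 1 CPU = 1000m and 1Mi = 1024Ki):

  cpu requests:    600m+100m+100m+250m+200m+100m = 1350m;  1350/2000 = 67%
  cpu limits:      700m;                                    700/2000 = 35%
  memory requests: 512Mi+70Mi+100Mi = 682Mi = 698368Ki;  698368/4008596 ≈ 17%
  memory limits:   700Mi+170Mi = 870Mi = 890880Ki;       890880/4008596 ≈ 22%
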
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-900196 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-900196 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-900196 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeReady                11m                kubelet          Node functional-900196 status is now: NodeReady
	  Normal  RegisteredNode           11m                node-controller  Node functional-900196 event: Registered Node functional-900196 in Controller
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-900196 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-900196 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-900196 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-900196 event: Registered Node functional-900196 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-900196 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-900196 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-900196 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-900196 event: Registered Node functional-900196 in Controller
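
The three "Starting kubelet." groups in the events line up with the attempt 0/1/2 container generations listed earlier. The same node view can be regenerated at any point with (assuming the usual minikube-managed kubeconfig context):

  $ kubectl --context functional-900196 describe node functional-900196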
	
	
	==> dmesg <==
	[Oct18 14:22] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000060] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005311] (rpcbind)[117]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.172763] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000004] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.084482] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.093384] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.140696] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.449328] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.072738] kauditd_printk_skb: 214 callbacks suppressed
	[Oct18 14:23] kauditd_printk_skb: 56 callbacks suppressed
	[  +4.565209] kauditd_printk_skb: 176 callbacks suppressed
	[ +13.742304] kauditd_printk_skb: 131 callbacks suppressed
	[  +0.110846] kauditd_printk_skb: 12 callbacks suppressed
	[  +1.037829] kauditd_printk_skb: 241 callbacks suppressed
	[Oct18 14:24] kauditd_printk_skb: 165 callbacks suppressed
	[  +4.839368] kauditd_printk_skb: 116 callbacks suppressed
	[  +1.092432] kauditd_printk_skb: 127 callbacks suppressed
	[  +0.000023] kauditd_printk_skb: 74 callbacks suppressed
	[ +25.947836] kauditd_printk_skb: 26 callbacks suppressed
	[Oct18 14:32] kauditd_printk_skb: 26 callbacks suppressed
	[  +7.150915] kauditd_printk_skb: 25 callbacks suppressed
	[Oct18 14:33] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [6c0794ff6e8e5711e73c6ed64f56ecf0f6dc92706a4d204ee111f11290cf2e44] <==
	{"level":"warn","ts":"2025-10-18T14:23:19.002339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:23:19.014970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:23:19.015271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:23:19.026883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:23:19.036277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:23:19.044009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:23:19.126022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35570","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T14:23:43.632803Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-18T14:23:43.632873Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-900196","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.34:2380"],"advertise-client-urls":["https://192.168.39.34:2379"]}
	{"level":"error","ts":"2025-10-18T14:23:43.632953Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T14:23:43.719435Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T14:23:43.719543Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T14:23:43.719582Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"6c39268f2da6496d","current-leader-member-id":"6c39268f2da6496d"}
	{"level":"info","ts":"2025-10-18T14:23:43.719736Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-18T14:23:43.719767Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-18T14:23:43.719987Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T14:23:43.720033Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T14:23:43.720041Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-18T14:23:43.720078Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.34:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T14:23:43.720085Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.34:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T14:23:43.720091Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.34:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T14:23:43.723000Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.34:2380"}
	{"level":"error","ts":"2025-10-18T14:23:43.723081Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.34:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T14:23:43.723126Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.34:2380"}
	{"level":"info","ts":"2025-10-18T14:23:43.723144Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-900196","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.34:2380"],"advertise-client-urls":["https://192.168.39.34:2379"]}
	
	
	==> etcd [aedc2a498839e385f7a9db347ff30ad23bb639573ca6c9ff50a4254948df22d0] <==
	{"level":"warn","ts":"2025-10-18T14:24:03.393411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.403553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.411920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.423842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.450107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.475855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.482214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.494263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.519912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.543096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.561736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.582286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.599796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.634238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.669265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.680518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.696148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.708272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.732117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.768874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.789233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.841018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41720","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T14:34:02.646581Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":984}
	{"level":"info","ts":"2025-10-18T14:34:02.657702Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":984,"took":"10.627386ms","hash":3562947326,"current-db-size-bytes":3338240,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":3338240,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2025-10-18T14:34:02.657829Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3562947326,"revision":984,"compact-revision":-1}
	
	
	==> kernel <==
	 14:34:28 up 12 min,  0 users,  load average: 0.26, 0.25, 0.22
	Linux functional-900196 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [53113ba9ccc6d684cb2173828ed00fedd934e317a9477387680bd35747276790] <==
	I1018 14:24:04.719486       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 14:24:04.728358       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 14:24:04.729330       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 14:24:04.729605       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 14:24:04.729863       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1018 14:24:04.734364       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 14:24:04.736132       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 14:24:04.740621       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 14:24:04.751733       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 14:24:05.530077       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 14:24:06.464533       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 14:24:06.515450       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 14:24:06.545118       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 14:24:06.554419       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 14:24:08.049198       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 14:24:08.334307       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 14:24:08.437412       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 14:24:20.264564       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.161.77"}
	I1018 14:24:24.494152       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.104.89.83"}
	I1018 14:24:26.055389       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.103.130.8"}
	I1018 14:24:26.172816       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.93.197"}
	I1018 14:32:21.327579       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 14:32:21.622411       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.45.148"}
	I1018 14:32:21.643738       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.131.141"}
	I1018 14:34:04.657900       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [8a1ba16847db09edd496b432d3f8beb8e87e3ad268c294da60db67bc799aad70] <==
	I1018 14:23:23.227105       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 14:23:23.227092       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 14:23:23.228466       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 14:23:23.228646       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 14:23:23.230757       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 14:23:23.230788       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 14:23:23.230794       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 14:23:23.234954       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 14:23:23.236092       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 14:23:23.237277       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 14:23:23.237366       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 14:23:23.237450       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-900196"
	I1018 14:23:23.237498       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 14:23:23.239047       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 14:23:23.252836       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 14:23:23.255972       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 14:23:23.259487       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 14:23:23.264319       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 14:23:23.271438       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 14:23:23.275470       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 14:23:23.275871       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 14:23:23.277193       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 14:23:23.279880       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 14:23:23.279907       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 14:23:23.292993       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [8dbff326b6cc8afa6d03358920f9853179471569f784102d88c64cdf4fd85912] <==
	I1018 14:24:08.082577       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 14:24:08.082769       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 14:24:08.082866       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 14:24:08.085006       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 14:24:08.086420       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 14:24:08.090627       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 14:24:08.090734       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 14:24:08.090767       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 14:24:08.091261       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 14:24:08.095383       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 14:24:08.097339       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 14:24:08.098717       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 14:24:08.098851       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 14:24:08.099335       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 14:24:08.099383       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 14:24:08.099390       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 14:24:08.099395       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 14:24:08.111890       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 14:24:08.115418       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	E1018 14:32:21.454866       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 14:32:21.460410       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 14:32:21.470427       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 14:32:21.481978       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 14:32:21.482304       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 14:32:21.500231       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [08267a0026df926525df9dfe06132bd39e9bdc06eb9ee97f4286651cddabc784] <==
	I1018 14:24:05.611269       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 14:24:05.714472       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 14:24:05.714927       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.34"]
	E1018 14:24:05.716492       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 14:24:05.812046       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1018 14:24:05.812520       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1018 14:24:05.812622       1 server_linux.go:132] "Using iptables Proxier"
	I1018 14:24:05.848718       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 14:24:05.849357       1 server.go:527] "Version info" version="v1.34.1"
	I1018 14:24:05.849554       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 14:24:05.857884       1 config.go:200] "Starting service config controller"
	I1018 14:24:05.858034       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 14:24:05.858120       1 config.go:106] "Starting endpoint slice config controller"
	I1018 14:24:05.858304       1 config.go:309] "Starting node config controller"
	I1018 14:24:05.858382       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 14:24:05.858409       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 14:24:05.859956       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 14:24:05.860073       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 14:24:05.858159       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 14:24:05.959014       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 14:24:05.961218       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 14:24:05.961404       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [5ee1e13dc39acf818c45b34aab5a553b0357925c855ed6903a3974b7e38fd710] <==
	I1018 14:23:21.305079       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 14:23:21.407136       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 14:23:21.407229       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.34"]
	E1018 14:23:21.407293       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 14:23:21.487588       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1018 14:23:21.487862       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1018 14:23:21.487981       1 server_linux.go:132] "Using iptables Proxier"
	I1018 14:23:21.506483       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 14:23:21.508268       1 server.go:527] "Version info" version="v1.34.1"
	I1018 14:23:21.508285       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 14:23:21.521297       1 config.go:200] "Starting service config controller"
	I1018 14:23:21.531578       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 14:23:21.527226       1 config.go:309] "Starting node config controller"
	I1018 14:23:21.531853       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 14:23:21.531859       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 14:23:21.530548       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 14:23:21.531866       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 14:23:21.530113       1 config.go:106] "Starting endpoint slice config controller"
	I1018 14:23:21.532475       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 14:23:21.632482       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 14:23:21.632640       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 14:23:21.632695       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [b07ae915d18fc54175e6566e7805a5718e41da62fd8311a3fe672c42d5f4ba4d] <==
	I1018 14:24:04.076446       1 serving.go:386] Generated self-signed cert in-memory
	W1018 14:24:04.644569       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 14:24:04.644616       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 14:24:04.644625       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 14:24:04.644632       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 14:24:04.688693       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 14:24:04.688735       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 14:24:04.691257       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 14:24:04.691335       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 14:24:04.691507       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 14:24:04.691587       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 14:24:04.791968       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [e89892ddae1dc2d13c768357fc4a9f9f5f5676dbe163ddcf14af300adb499012] <==
	I1018 14:23:18.391537       1 serving.go:386] Generated self-signed cert in-memory
	W1018 14:23:19.734711       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 14:23:19.734910       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 14:23:19.735573       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 14:23:19.735742       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 14:23:19.836590       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 14:23:19.836721       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 14:23:19.841732       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 14:23:19.841789       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 14:23:19.842895       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 14:23:19.843086       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 14:23:19.942067       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 14:23:43.656331       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1018 14:23:43.656385       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 14:23:43.655643       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1018 14:23:43.662060       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1018 14:23:43.662165       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1018 14:23:43.662197       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 18 14:33:41 functional-900196 kubelet[5614]: E1018 14:33:41.742758    5614 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-9f59p" podUID="2c9afed2-19c4-4b3d-8f01-a136ceebbe4b"
	Oct 18 14:33:50 functional-900196 kubelet[5614]: E1018 14:33:50.991998    5614 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760798030990560554  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Oct 18 14:33:50 functional-900196 kubelet[5614]: E1018 14:33:50.992087    5614 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760798030990560554  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Oct 18 14:33:51 functional-900196 kubelet[5614]: E1018 14:33:51.743015    5614 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-dd4gd" podUID="2d909ba8-2bc8-448c-bf6e-e220108c425f"
	Oct 18 14:33:55 functional-900196 kubelet[5614]: E1018 14:33:55.742415    5614 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-9f59p" podUID="2c9afed2-19c4-4b3d-8f01-a136ceebbe4b"
	Oct 18 14:33:59 functional-900196 kubelet[5614]: E1018 14:33:59.207504    5614 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 18 14:33:59 functional-900196 kubelet[5614]: E1018 14:33:59.207551    5614 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 18 14:33:59 functional-900196 kubelet[5614]: E1018 14:33:59.207786    5614 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-kfk2q_kubernetes-dashboard(a429a741-948b-4a3a-b4f9-355dff740154): ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 18 14:33:59 functional-900196 kubelet[5614]: E1018 14:33:59.207828    5614 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-kfk2q" podUID="a429a741-948b-4a3a-b4f9-355dff740154"
	Oct 18 14:33:59 functional-900196 kubelet[5614]: E1018 14:33:59.992102    5614 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-kfk2q" podUID="a429a741-948b-4a3a-b4f9-355dff740154"
	Oct 18 14:34:00 functional-900196 kubelet[5614]: E1018 14:34:00.811043    5614 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod28c9616d-7ca6-4480-bb36-f61b451a4b23/crio-58706c3ba9833de77c1199c0be5d66ba9b5d1175cad1f0aa8ca286571a930d73: Error finding container 58706c3ba9833de77c1199c0be5d66ba9b5d1175cad1f0aa8ca286571a930d73: Status 404 returned error can't find the container with id 58706c3ba9833de77c1199c0be5d66ba9b5d1175cad1f0aa8ca286571a930d73
	Oct 18 14:34:00 functional-900196 kubelet[5614]: E1018 14:34:00.811735    5614 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod0adc310d24a81dac60c5ad1f35e7c92b/crio-e0e939baaa67d0fd4f6816b8d93aa969b6b5bf84197f0d2445e6e3e01e191cd3: Error finding container e0e939baaa67d0fd4f6816b8d93aa969b6b5bf84197f0d2445e6e3e01e191cd3: Status 404 returned error can't find the container with id e0e939baaa67d0fd4f6816b8d93aa969b6b5bf84197f0d2445e6e3e01e191cd3
	Oct 18 14:34:00 functional-900196 kubelet[5614]: E1018 14:34:00.811985    5614 manager.go:1116] Failed to create existing container: /kubepods/burstable/podd219f60f-61db-4f59-beb6-f1014320fded/crio-a31dcfcaadf596f99b1b00b651e185a3a4c96ef68508ad8e4b58763486df5dd3: Error finding container a31dcfcaadf596f99b1b00b651e185a3a4c96ef68508ad8e4b58763486df5dd3: Status 404 returned error can't find the container with id a31dcfcaadf596f99b1b00b651e185a3a4c96ef68508ad8e4b58763486df5dd3
	Oct 18 14:34:00 functional-900196 kubelet[5614]: E1018 14:34:00.812169    5614 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod1539d00838a4465e9c70da2faa0ecce0/crio-88e38926414dc77c4f11b9e11c309696d7379acb8fe1aa3716948b3c8f7f43ab: Error finding container 88e38926414dc77c4f11b9e11c309696d7379acb8fe1aa3716948b3c8f7f43ab: Status 404 returned error can't find the container with id 88e38926414dc77c4f11b9e11c309696d7379acb8fe1aa3716948b3c8f7f43ab
	Oct 18 14:34:00 functional-900196 kubelet[5614]: E1018 14:34:00.812469    5614 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod0adc310d24a81dac60c5ad1f35e7c92b/crio-f4ca0e130b5a974969af0faccb851fe8406129db6f0728de9417aea5c09a6d81: Error finding container f4ca0e130b5a974969af0faccb851fe8406129db6f0728de9417aea5c09a6d81: Status 404 returned error can't find the container with id f4ca0e130b5a974969af0faccb851fe8406129db6f0728de9417aea5c09a6d81
	Oct 18 14:34:00 functional-900196 kubelet[5614]: E1018 14:34:00.812717    5614 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod97ed5fabf9bf40e88932da5fec13829b/crio-50b972f1c52368f0fbc439ffc234e98462143a277880a96bd1e67be20b278229: Error finding container 50b972f1c52368f0fbc439ffc234e98462143a277880a96bd1e67be20b278229: Status 404 returned error can't find the container with id 50b972f1c52368f0fbc439ffc234e98462143a277880a96bd1e67be20b278229
	Oct 18 14:34:00 functional-900196 kubelet[5614]: E1018 14:34:00.813078    5614 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod6ab2d89b-2ccc-43cd-874a-1c4e895df2f0/crio-d6fe8246788800a71673d766b79d51cda6360d6c9b9f1f5de1c292ab7ae27b55: Error finding container d6fe8246788800a71673d766b79d51cda6360d6c9b9f1f5de1c292ab7ae27b55: Status 404 returned error can't find the container with id d6fe8246788800a71673d766b79d51cda6360d6c9b9f1f5de1c292ab7ae27b55
	Oct 18 14:34:00 functional-900196 kubelet[5614]: E1018 14:34:00.996096    5614 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760798040994229428  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Oct 18 14:34:00 functional-900196 kubelet[5614]: E1018 14:34:00.996135    5614 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760798040994229428  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Oct 18 14:34:06 functional-900196 kubelet[5614]: E1018 14:34:06.743436    5614 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-9f59p" podUID="2c9afed2-19c4-4b3d-8f01-a136ceebbe4b"
	Oct 18 14:34:10 functional-900196 kubelet[5614]: E1018 14:34:10.998641    5614 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760798050997952841  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Oct 18 14:34:10 functional-900196 kubelet[5614]: E1018 14:34:10.998724    5614 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760798050997952841  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Oct 18 14:34:17 functional-900196 kubelet[5614]: E1018 14:34:17.743253    5614 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-9f59p" podUID="2c9afed2-19c4-4b3d-8f01-a136ceebbe4b"
	Oct 18 14:34:21 functional-900196 kubelet[5614]: E1018 14:34:21.001088    5614 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760798060999962241  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Oct 18 14:34:21 functional-900196 kubelet[5614]: E1018 14:34:21.001134    5614 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760798060999962241  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	
	
	==> storage-provisioner [0e1bccb3b64c5b5b17aec547dccfe3b87145e92687850c7b5f2eeb2fbecd51b8] <==
	W1018 14:34:04.148721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:06.152752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:06.157703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:08.161454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:08.170969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:10.175586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:10.181634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:12.185758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:12.194917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:14.198528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:14.204347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:16.208394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:16.214410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:18.219035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:18.224306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:20.233893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:20.255580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:22.270941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:22.277906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:24.281835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:24.287163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:26.304487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:26.315697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:28.320619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:28.326467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c5b51b4a4c799496c3145843cf20f4bb06e303ff8f4c636509258d860fa6f005] <==
	I1018 14:23:21.202377       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 14:23:21.228980       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 14:23:21.229023       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 14:23:21.238321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:23:24.693920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:23:28.954099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:23:32.552491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:23:35.607127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:23:38.630085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:23:38.643928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 14:23:38.644173       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 14:23:38.644347       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-900196_34b3e5c2-cb77-42e0-8936-58509692af6c!
	I1018 14:23:38.644293       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3e018e70-a737-4b8a-9686-e3ed69bbe860", APIVersion:"v1", ResourceVersion:"500", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-900196_34b3e5c2-cb77-42e0-8936-58509692af6c became leader
	W1018 14:23:38.652306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:23:38.659508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 14:23:38.745098       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-900196_34b3e5c2-cb77-42e0-8936-58509692af6c!
	W1018 14:23:40.662333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:23:40.670966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:23:42.675961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:23:42.683103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-900196 -n functional-900196
helpers_test.go:269: (dbg) Run:  kubectl --context functional-900196 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-9f59p hello-node-connect-7d85dfc575-dd4gd mysql-5bb876957f-lc247 sp-pod dashboard-metrics-scraper-77bf4d6c4c-kfk2q kubernetes-dashboard-855c9754f9-mbxqb
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-900196 describe pod busybox-mount hello-node-75c85bcc94-9f59p hello-node-connect-7d85dfc575-dd4gd mysql-5bb876957f-lc247 sp-pod dashboard-metrics-scraper-77bf4d6c4c-kfk2q kubernetes-dashboard-855c9754f9-mbxqb
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-900196 describe pod busybox-mount hello-node-75c85bcc94-9f59p hello-node-connect-7d85dfc575-dd4gd mysql-5bb876957f-lc247 sp-pod dashboard-metrics-scraper-77bf4d6c4c-kfk2q kubernetes-dashboard-855c9754f9-mbxqb: exit status 1 (130.093636ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-900196/192.168.39.34
	Start Time:       Sat, 18 Oct 2025 14:30:38 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  cri-o://3a6150c71b2aba5471783b062a7b940e5d8823a4ffdc974bc8cbcafef4b47a8c
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 18 Oct 2025 14:32:12 +0000
	      Finished:     Sat, 18 Oct 2025 14:32:13 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hrltn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-hrltn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  3m51s  default-scheduler  Successfully assigned default/busybox-mount to functional-900196
	  Normal  Pulling    3m51s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     2m17s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.267s (1m34.172s including waiting). Image size: 4631262 bytes.
	  Normal  Created    2m17s  kubelet            Created container: mount-munger
	  Normal  Started    2m17s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-9f59p
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-900196/192.168.39.34
	Start Time:       Sat, 18 Oct 2025 14:24:26 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j5bzj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-j5bzj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-9f59p to functional-900196
	  Warning  Failed     5m54s                 kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m31s (x4 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     75s (x3 over 8m30s)   kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     75s (x4 over 8m30s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    12s (x10 over 8m29s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     12s (x10 over 8m29s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-dd4gd
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-900196/192.168.39.34
	Start Time:       Sat, 18 Oct 2025 14:24:25 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hp9w4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hp9w4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  10m                default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-dd4gd to functional-900196
	  Warning  Failed     3m52s              kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     106s (x3 over 9m)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     106s (x4 over 9m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    38s (x9 over 9m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     38s (x9 over 9m)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    26s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-lc247
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-900196/192.168.39.34
	Start Time:       Sat, 18 Oct 2025 14:24:24 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-796d9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-796d9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-5bb876957f-lc247 to functional-900196
	  Warning  Failed     9m31s                  kubelet            Failed to pull image "docker.io/mysql:5.7": copying system image from manifest list: determining manifest MIME type for docker://mysql:5.7: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m53s (x2 over 6m58s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m18s (x4 over 9m31s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m18s                  kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    61s (x10 over 9m30s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     61s (x10 over 9m30s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    47s (x5 over 10m)      kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-900196/192.168.39.34
	Start Time:       Sat, 18 Oct 2025 14:24:32 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hr5x7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-hr5x7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m56s                  default-scheduler  Successfully assigned default/sp-pod to functional-900196
	  Warning  Failed     7m29s                  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m51s (x3 over 7m29s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m51s (x2 over 5m24s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2m13s (x5 over 7m28s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     2m13s (x5 over 7m28s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    118s (x4 over 9m56s)   kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-kfk2q" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-mbxqb" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-900196 describe pod busybox-mount hello-node-75c85bcc94-9f59p hello-node-connect-7d85dfc575-dd4gd mysql-5bb876957f-lc247 sp-pod dashboard-metrics-scraper-77bf4d6c4c-kfk2q kubernetes-dashboard-855c9754f9-mbxqb: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.60s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (370.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [6ab2d89b-2ccc-43cd-874a-1c4e895df2f0] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004407018s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-900196 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-900196 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-900196 get pvc myclaim -o=json
I1018 14:24:31.478718 1759792 retry.go:31] will retry after 1.190978827s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:de277c6a-efda-4c06-a9c3-d4931a644ee3 ResourceVersion:698 Generation:0 CreationTimestamp:2025-10-18 14:24:31 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0018aeed0 VolumeMode:0xc0018aeee0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
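For readability, the PVC spec embedded in the last-applied-configuration annotation above is equivalent to the manifest below. This is a reconstruction from that annotation only, not the verbatim contents of testdata/storage-provisioner/pvc.yaml, which may differ in formatting:

	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	  namespace: default
	spec:
	  accessModes:
	    - ReadWriteOnce
	  resources:
	    requests:
	      storage: 500Mi
	  volumeMode: Filesystem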
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-900196 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-900196 apply -f testdata/storage-provisioner/pod.yaml
I1018 14:24:32.868237 1759792 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [98ccf3bf-59b1-4a90-9375-1fd0b7584f77] Pending
helpers_test.go:352: "sp-pod" [98ccf3bf-59b1-4a90-9375-1fd0b7584f77] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1018 14:26:00.845105 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:26:00.851544 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
[13 near-identical "Loading client cert failed" messages elided; the reload was retried with exponential backoff between 14:26:00 and 14:28:44]
E1018 14:28:44.707848 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
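These errors come from client-go's tls-transport-cache: the addons-891059 profile directory was removed when the addons suite tore down, so the cached transport keeps retrying to reload its client certificate, backing off exponentially until the process exits. A quick illustrative check from the host (the path is the one in the log; nothing minikube-specific about the command):

	ls -l /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt
	# expected: ls: cannot access '.../client.crt': No such file or directory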
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-900196 -n functional-900196
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-10-18 14:30:33.163197171 +0000 UTC m=+1326.898554205
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-900196 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-900196 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-900196/192.168.39.34
Start Time:       Sat, 18 Oct 2025 14:24:32 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
  IP:  10.244.0.10
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hr5x7 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-hr5x7:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  6m                   default-scheduler  Successfully assigned default/sp-pod to functional-900196
Warning  Failed     3m33s                kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     88s (x2 over 3m33s)  kubelet            Error: ErrImagePull
Warning  Failed     88s                  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    77s (x2 over 3m32s)  kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     77s (x2 over 3m32s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    63s (x3 over 6m)     kubelet            Pulling image "docker.io/nginx"
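The failure is Docker Hub's anonymous pull rate limit (HTTP 429 "toomanyrequests"), not a cluster fault. Docker documents a way to inspect the remaining anonymous quota from the affected host; a sketch (assumes curl and jq are installed):

	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -sI -H "Authorization: Bearer $TOKEN" "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i ratelimit

Pre-seeding the node's image cache, e.g. with minikube -p functional-900196 image load docker.io/nginx, would sidestep the anonymous pull entirely.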
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-900196 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-900196 logs sp-pod -n default: exit status 1 (86.103036ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-900196 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
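The timed-out wait can be reproduced outside the test harness; a rough equivalent of the 6m0s poll, using the same context and label selector:

	kubectl --context functional-900196 -n default wait --for=condition=Ready pod -l test=storage-provisioner --timeout=6m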
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-900196 -n functional-900196
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-900196 logs -n 25: (1.560858639s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ functional-900196 kubectl -- --context functional-900196 get pods                                                          │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:23 UTC │ 18 Oct 25 14:23 UTC │
	│ start   │ -p functional-900196 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                   │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:23 UTC │ 18 Oct 25 14:24 UTC │
	│ service │ invalid-svc -p functional-900196                                                                                           │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:24 UTC │                     │
	│ ssh     │ functional-900196 ssh sudo cat /etc/test/nested/copy/1759792/hosts                                                         │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:24 UTC │ 18 Oct 25 14:24 UTC │
	│ config  │ functional-900196 config unset cpus                                                                                        │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:24 UTC │ 18 Oct 25 14:24 UTC │
	│ ssh     │ functional-900196 ssh sudo cat /etc/ssl/certs/1759792.pem                                                                  │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:24 UTC │ 18 Oct 25 14:24 UTC │
	│ config  │ functional-900196 config get cpus                                                                                          │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:24 UTC │                     │
	│ config  │ functional-900196 config set cpus 2                                                                                        │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:24 UTC │ 18 Oct 25 14:24 UTC │
	│ config  │ functional-900196 config get cpus                                                                                          │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:24 UTC │ 18 Oct 25 14:24 UTC │
	│ ssh     │ functional-900196 ssh sudo cat /usr/share/ca-certificates/1759792.pem                                                      │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:24 UTC │ 18 Oct 25 14:24 UTC │
	│ config  │ functional-900196 config unset cpus                                                                                        │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:24 UTC │ 18 Oct 25 14:24 UTC │
	│ cp      │ functional-900196 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:24 UTC │ 18 Oct 25 14:24 UTC │
	│ config  │ functional-900196 config get cpus                                                                                          │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:24 UTC │                     │
	│ ssh     │ functional-900196 ssh echo hello                                                                                           │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:24 UTC │ 18 Oct 25 14:24 UTC │
	│ ssh     │ functional-900196 ssh sudo cat /etc/ssl/certs/51391683.0                                                                   │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:24 UTC │ 18 Oct 25 14:24 UTC │
	│ ssh     │ functional-900196 ssh cat /etc/hostname                                                                                    │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:24 UTC │ 18 Oct 25 14:24 UTC │
	│ ssh     │ functional-900196 ssh sudo cat /etc/ssl/certs/17597922.pem                                                                 │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:24 UTC │ 18 Oct 25 14:24 UTC │
	│ cp      │ functional-900196 cp functional-900196:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2349911493/001/cp-test.txt │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:24 UTC │ 18 Oct 25 14:24 UTC │
	│ ssh     │ functional-900196 ssh sudo cat /usr/share/ca-certificates/17597922.pem                                                     │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:24 UTC │ 18 Oct 25 14:24 UTC │
	│ ssh     │ functional-900196 ssh -n functional-900196 sudo cat /home/docker/cp-test.txt                                               │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:24 UTC │ 18 Oct 25 14:24 UTC │
	│ ssh     │ functional-900196 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                   │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:24 UTC │ 18 Oct 25 14:24 UTC │
	│ cp      │ functional-900196 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:24 UTC │ 18 Oct 25 14:24 UTC │
	│ addons  │ functional-900196 addons list                                                                                              │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:24 UTC │ 18 Oct 25 14:24 UTC │
	│ ssh     │ functional-900196 ssh -n functional-900196 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:24 UTC │ 18 Oct 25 14:24 UTC │
	│ addons  │ functional-900196 addons list -o json                                                                                      │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:24 UTC │ 18 Oct 25 14:24 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 14:23:42
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 14:23:42.664874 1767290 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:23:42.665118 1767290 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:23:42.665121 1767290 out.go:374] Setting ErrFile to fd 2...
	I1018 14:23:42.665125 1767290 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:23:42.665331 1767290 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1755824/.minikube/bin
	I1018 14:23:42.665865 1767290 out.go:368] Setting JSON to false
	I1018 14:23:42.666892 1767290 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":21971,"bootTime":1760775452,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 14:23:42.666991 1767290 start.go:141] virtualization: kvm guest
	I1018 14:23:42.669198 1767290 out.go:179] * [functional-900196] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 14:23:42.670729 1767290 notify.go:220] Checking for updates...
	I1018 14:23:42.670751 1767290 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 14:23:42.672206 1767290 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 14:23:42.673779 1767290 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1755824/kubeconfig
	I1018 14:23:42.675041 1767290 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1755824/.minikube
	I1018 14:23:42.676162 1767290 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 14:23:42.677401 1767290 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 14:23:42.678911 1767290 config.go:182] Loaded profile config "functional-900196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:23:42.678994 1767290 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 14:23:42.679477 1767290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:23:42.679514 1767290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:23:42.693748 1767290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36121
	I1018 14:23:42.694288 1767290 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:23:42.694877 1767290 main.go:141] libmachine: Using API Version  1
	I1018 14:23:42.694900 1767290 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:23:42.695388 1767290 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:23:42.695626 1767290 main.go:141] libmachine: (functional-900196) Calling .DriverName
	I1018 14:23:42.728065 1767290 out.go:179] * Using the kvm2 driver based on existing profile
	I1018 14:23:42.729561 1767290 start.go:305] selected driver: kvm2
	I1018 14:23:42.729584 1767290 start.go:925] validating driver "kvm2" against &{Name:functional-900196 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-900196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.34 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:23:42.729717 1767290 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 14:23:42.730048 1767290 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 14:23:42.730132 1767290 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-1755824/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 14:23:42.744742 1767290 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 14:23:42.744767 1767290 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-1755824/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 14:23:42.759289 1767290 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 14:23:42.760101 1767290 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 14:23:42.760135 1767290 cni.go:84] Creating CNI manager for ""
	I1018 14:23:42.760218 1767290 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 14:23:42.760269 1767290 start.go:349] cluster config:
	{Name:functional-900196 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-900196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.34 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:23:42.760401 1767290 iso.go:125] acquiring lock: {Name:mk7faf1d3c636cdbb2becc20102b665984151b51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 14:23:42.762580 1767290 out.go:179] * Starting "functional-900196" primary control-plane node in "functional-900196" cluster
	I1018 14:23:42.763846 1767290 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 14:23:42.763879 1767290 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 14:23:42.763890 1767290 cache.go:58] Caching tarball of preloaded images
	I1018 14:23:42.763992 1767290 preload.go:233] Found /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 14:23:42.763999 1767290 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 14:23:42.764090 1767290 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/config.json ...
	I1018 14:23:42.764278 1767290 start.go:360] acquireMachinesLock for functional-900196: {Name:mkd96faf82baee5d117338197f9c6cbf4f45de94 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1018 14:23:42.764323 1767290 start.go:364] duration metric: took 29.382µs to acquireMachinesLock for "functional-900196"
	I1018 14:23:42.764364 1767290 start.go:96] Skipping create...Using existing machine configuration
	I1018 14:23:42.764370 1767290 fix.go:54] fixHost starting: 
	I1018 14:23:42.764694 1767290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:23:42.764734 1767290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:23:42.778615 1767290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39291
	I1018 14:23:42.779083 1767290 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:23:42.779621 1767290 main.go:141] libmachine: Using API Version  1
	I1018 14:23:42.779632 1767290 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:23:42.779980 1767290 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:23:42.780166 1767290 main.go:141] libmachine: (functional-900196) Calling .DriverName
	I1018 14:23:42.780272 1767290 main.go:141] libmachine: (functional-900196) Calling .GetState
	I1018 14:23:42.782388 1767290 fix.go:112] recreateIfNeeded on functional-900196: state=Running err=<nil>
	W1018 14:23:42.782401 1767290 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 14:23:42.784224 1767290 out.go:252] * Updating the running kvm2 "functional-900196" VM ...
	I1018 14:23:42.784245 1767290 machine.go:93] provisionDockerMachine start ...
	I1018 14:23:42.784260 1767290 main.go:141] libmachine: (functional-900196) Calling .DriverName
	I1018 14:23:42.784490 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHHostname
	I1018 14:23:42.787513 1767290 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
	I1018 14:23:42.788006 1767290 main.go:141] libmachine: (functional-900196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:a4:ac", ip: ""} in network mk-functional-900196: {Iface:virbr1 ExpiryTime:2025-10-18 15:22:18 +0000 UTC Type:0 Mac:52:54:00:e2:a4:ac Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:functional-900196 Clientid:01:52:54:00:e2:a4:ac}
	I1018 14:23:42.788025 1767290 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined IP address 192.168.39.34 and MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
	I1018 14:23:42.788213 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHPort
	I1018 14:23:42.788444 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHKeyPath
	I1018 14:23:42.788640 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHKeyPath
	I1018 14:23:42.788795 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHUsername
	I1018 14:23:42.788990 1767290 main.go:141] libmachine: Using SSH client type: native
	I1018 14:23:42.789323 1767290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I1018 14:23:42.789332 1767290 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 14:23:42.893116 1767290 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-900196
	
	I1018 14:23:42.893141 1767290 main.go:141] libmachine: (functional-900196) Calling .GetMachineName
	I1018 14:23:42.893456 1767290 buildroot.go:166] provisioning hostname "functional-900196"
	I1018 14:23:42.893486 1767290 main.go:141] libmachine: (functional-900196) Calling .GetMachineName
	I1018 14:23:42.893814 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHHostname
	I1018 14:23:42.897222 1767290 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
	I1018 14:23:42.897631 1767290 main.go:141] libmachine: (functional-900196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:a4:ac", ip: ""} in network mk-functional-900196: {Iface:virbr1 ExpiryTime:2025-10-18 15:22:18 +0000 UTC Type:0 Mac:52:54:00:e2:a4:ac Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:functional-900196 Clientid:01:52:54:00:e2:a4:ac}
	I1018 14:23:42.897643 1767290 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined IP address 192.168.39.34 and MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
	I1018 14:23:42.897991 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHPort
	I1018 14:23:42.898260 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHKeyPath
	I1018 14:23:42.898445 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHKeyPath
	I1018 14:23:42.898566 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHUsername
	I1018 14:23:42.898735 1767290 main.go:141] libmachine: Using SSH client type: native
	I1018 14:23:42.898948 1767290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I1018 14:23:42.898954 1767290 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-900196 && echo "functional-900196" | sudo tee /etc/hostname
	I1018 14:23:43.018950 1767290 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-900196
	
	I1018 14:23:43.018972 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHHostname
	I1018 14:23:43.022641 1767290 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
	I1018 14:23:43.023066 1767290 main.go:141] libmachine: (functional-900196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:a4:ac", ip: ""} in network mk-functional-900196: {Iface:virbr1 ExpiryTime:2025-10-18 15:22:18 +0000 UTC Type:0 Mac:52:54:00:e2:a4:ac Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:functional-900196 Clientid:01:52:54:00:e2:a4:ac}
	I1018 14:23:43.023091 1767290 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined IP address 192.168.39.34 and MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
	I1018 14:23:43.023372 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHPort
	I1018 14:23:43.023602 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHKeyPath
	I1018 14:23:43.023816 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHKeyPath
	I1018 14:23:43.023963 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHUsername
	I1018 14:23:43.024075 1767290 main.go:141] libmachine: Using SSH client type: native
	I1018 14:23:43.024291 1767290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I1018 14:23:43.024301 1767290 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-900196' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-900196/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-900196' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 14:23:43.127067 1767290 main.go:141] libmachine: SSH cmd err, output: <nil>: 
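	The command above keeps name resolution for the VM's own hostname intact: if no /etc/hosts entry mentions functional-900196, it rewrites an existing 127.0.1.1 line in place or appends one. The result can be checked with (a sketch, assuming the profile is still running):
	
		minikube -p functional-900196 ssh -- grep 127.0.1.1 /etc/hosts
		# 127.0.1.1 functional-900196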
	I1018 14:23:43.127089 1767290 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21409-1755824/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-1755824/.minikube}
	I1018 14:23:43.127108 1767290 buildroot.go:174] setting up certificates
	I1018 14:23:43.127118 1767290 provision.go:84] configureAuth start
	I1018 14:23:43.127142 1767290 main.go:141] libmachine: (functional-900196) Calling .GetMachineName
	I1018 14:23:43.127460 1767290 main.go:141] libmachine: (functional-900196) Calling .GetIP
	I1018 14:23:43.130557 1767290 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
	I1018 14:23:43.131028 1767290 main.go:141] libmachine: (functional-900196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:a4:ac", ip: ""} in network mk-functional-900196: {Iface:virbr1 ExpiryTime:2025-10-18 15:22:18 +0000 UTC Type:0 Mac:52:54:00:e2:a4:ac Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:functional-900196 Clientid:01:52:54:00:e2:a4:ac}
	I1018 14:23:43.131047 1767290 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined IP address 192.168.39.34 and MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
	I1018 14:23:43.131205 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHHostname
	I1018 14:23:43.134063 1767290 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
	I1018 14:23:43.134470 1767290 main.go:141] libmachine: (functional-900196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:a4:ac", ip: ""} in network mk-functional-900196: {Iface:virbr1 ExpiryTime:2025-10-18 15:22:18 +0000 UTC Type:0 Mac:52:54:00:e2:a4:ac Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:functional-900196 Clientid:01:52:54:00:e2:a4:ac}
	I1018 14:23:43.134486 1767290 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined IP address 192.168.39.34 and MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
	I1018 14:23:43.134775 1767290 provision.go:143] copyHostCerts
	I1018 14:23:43.134825 1767290 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1755824/.minikube/key.pem, removing ...
	I1018 14:23:43.134833 1767290 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1755824/.minikube/key.pem
	I1018 14:23:43.134922 1767290 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-1755824/.minikube/key.pem (1675 bytes)
	I1018 14:23:43.135120 1767290 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.pem, removing ...
	I1018 14:23:43.135127 1767290 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.pem
	I1018 14:23:43.135169 1767290 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.pem (1082 bytes)
	I1018 14:23:43.135265 1767290 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1755824/.minikube/cert.pem, removing ...
	I1018 14:23:43.135270 1767290 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1755824/.minikube/cert.pem
	I1018 14:23:43.135306 1767290 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-1755824/.minikube/cert.pem (1123 bytes)
	I1018 14:23:43.135412 1767290 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca-key.pem org=jenkins.functional-900196 san=[127.0.0.1 192.168.39.34 functional-900196 localhost minikube]
	I1018 14:23:43.298632 1767290 provision.go:177] copyRemoteCerts
	I1018 14:23:43.298689 1767290 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 14:23:43.298714 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHHostname
	I1018 14:23:43.301917 1767290 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
	I1018 14:23:43.302246 1767290 main.go:141] libmachine: (functional-900196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:a4:ac", ip: ""} in network mk-functional-900196: {Iface:virbr1 ExpiryTime:2025-10-18 15:22:18 +0000 UTC Type:0 Mac:52:54:00:e2:a4:ac Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:functional-900196 Clientid:01:52:54:00:e2:a4:ac}
	I1018 14:23:43.302265 1767290 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined IP address 192.168.39.34 and MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
	I1018 14:23:43.302526 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHPort
	I1018 14:23:43.302752 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHKeyPath
	I1018 14:23:43.302939 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHUsername
	I1018 14:23:43.303090 1767290 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/functional-900196/id_rsa Username:docker}
	I1018 14:23:43.390527 1767290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 14:23:43.422968 1767290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 14:23:43.456204 1767290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 14:23:43.489428 1767290 provision.go:87] duration metric: took 362.295336ms to configureAuth
	I1018 14:23:43.489450 1767290 buildroot.go:189] setting minikube options for container-runtime
	I1018 14:23:43.489692 1767290 config.go:182] Loaded profile config "functional-900196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:23:43.489763 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHHostname
	I1018 14:23:43.492808 1767290 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
	I1018 14:23:43.493389 1767290 main.go:141] libmachine: (functional-900196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:a4:ac", ip: ""} in network mk-functional-900196: {Iface:virbr1 ExpiryTime:2025-10-18 15:22:18 +0000 UTC Type:0 Mac:52:54:00:e2:a4:ac Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:functional-900196 Clientid:01:52:54:00:e2:a4:ac}
	I1018 14:23:43.493420 1767290 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined IP address 192.168.39.34 and MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
	I1018 14:23:43.493603 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHPort
	I1018 14:23:43.493839 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHKeyPath
	I1018 14:23:43.494003 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHKeyPath
	I1018 14:23:43.494166 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHUsername
	I1018 14:23:43.494309 1767290 main.go:141] libmachine: Using SSH client type: native
	I1018 14:23:43.494520 1767290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I1018 14:23:43.494532 1767290 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 14:23:49.207682 1767290 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 14:23:49.207699 1767290 machine.go:96] duration metric: took 6.423446389s to provisionDockerMachine
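	The drop-in written above marks the cluster's service CIDR (10.96.0.0/12) as an insecure registry, so CRI-O will pull from in-cluster registries over plain HTTP. What landed on the node can be inspected with (a sketch):
	
		minikube -p functional-900196 ssh -- cat /etc/sysconfig/crio.minikube
		# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '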
	I1018 14:23:49.207711 1767290 start.go:293] postStartSetup for "functional-900196" (driver="kvm2")
	I1018 14:23:49.207721 1767290 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 14:23:49.207737 1767290 main.go:141] libmachine: (functional-900196) Calling .DriverName
	I1018 14:23:49.208097 1767290 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 14:23:49.208123 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHHostname
	I1018 14:23:49.210917 1767290 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
	I1018 14:23:49.211432 1767290 main.go:141] libmachine: (functional-900196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:a4:ac", ip: ""} in network mk-functional-900196: {Iface:virbr1 ExpiryTime:2025-10-18 15:22:18 +0000 UTC Type:0 Mac:52:54:00:e2:a4:ac Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:functional-900196 Clientid:01:52:54:00:e2:a4:ac}
	I1018 14:23:49.211454 1767290 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined IP address 192.168.39.34 and MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
	I1018 14:23:49.211712 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHPort
	I1018 14:23:49.211983 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHKeyPath
	I1018 14:23:49.212149 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHUsername
	I1018 14:23:49.212267 1767290 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/functional-900196/id_rsa Username:docker}
	I1018 14:23:49.295227 1767290 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 14:23:49.300952 1767290 info.go:137] Remote host: Buildroot 2025.02
	I1018 14:23:49.300972 1767290 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1755824/.minikube/addons for local assets ...
	I1018 14:23:49.301040 1767290 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1755824/.minikube/files for local assets ...
	I1018 14:23:49.301107 1767290 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-1755824/.minikube/files/etc/ssl/certs/17597922.pem -> 17597922.pem in /etc/ssl/certs
	I1018 14:23:49.301170 1767290 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-1755824/.minikube/files/etc/test/nested/copy/1759792/hosts -> hosts in /etc/test/nested/copy/1759792
	I1018 14:23:49.301210 1767290 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1759792
	I1018 14:23:49.315095 1767290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/files/etc/ssl/certs/17597922.pem --> /etc/ssl/certs/17597922.pem (1708 bytes)
	I1018 14:23:49.348155 1767290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/files/etc/test/nested/copy/1759792/hosts --> /etc/test/nested/copy/1759792/hosts (40 bytes)
	I1018 14:23:49.380540 1767290 start.go:296] duration metric: took 172.812711ms for postStartSetup
	I1018 14:23:49.380577 1767290 fix.go:56] duration metric: took 6.616207962s for fixHost
	I1018 14:23:49.380608 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHHostname
	I1018 14:23:49.383992 1767290 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
	I1018 14:23:49.384503 1767290 main.go:141] libmachine: (functional-900196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:a4:ac", ip: ""} in network mk-functional-900196: {Iface:virbr1 ExpiryTime:2025-10-18 15:22:18 +0000 UTC Type:0 Mac:52:54:00:e2:a4:ac Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:functional-900196 Clientid:01:52:54:00:e2:a4:ac}
	I1018 14:23:49.384527 1767290 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined IP address 192.168.39.34 and MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
	I1018 14:23:49.384742 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHPort
	I1018 14:23:49.384987 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHKeyPath
	I1018 14:23:49.385142 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHKeyPath
	I1018 14:23:49.385253 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHUsername
	I1018 14:23:49.385432 1767290 main.go:141] libmachine: Using SSH client type: native
	I1018 14:23:49.385653 1767290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I1018 14:23:49.385659 1767290 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1018 14:23:49.492464 1767290 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760797429.484197475
	
	I1018 14:23:49.492481 1767290 fix.go:216] guest clock: 1760797429.484197475
	I1018 14:23:49.492491 1767290 fix.go:229] Guest: 2025-10-18 14:23:49.484197475 +0000 UTC Remote: 2025-10-18 14:23:49.380580211 +0000 UTC m=+6.759750661 (delta=103.617264ms)
	I1018 14:23:49.492549 1767290 fix.go:200] guest clock delta is within tolerance: 103.617264ms
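	fix.go samples date +%s.%N on the guest and compares it with the host-side timestamp; the ~103.6ms delta is under the drift tolerance, so no clock resync is forced. The comparison can be approximated by hand (a rough sketch; the SSH round trip inflates the apparent delta):
	
		host=$(date +%s.%N); guest=$(minikube -p functional-900196 ssh -- date +%s.%N | tr -d '\r'); echo "$guest - $host" | bc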
	I1018 14:23:49.492556 1767290 start.go:83] releasing machines lock for "functional-900196", held for 6.728218148s
	I1018 14:23:49.492591 1767290 main.go:141] libmachine: (functional-900196) Calling .DriverName
	I1018 14:23:49.492950 1767290 main.go:141] libmachine: (functional-900196) Calling .GetIP
	I1018 14:23:49.496201 1767290 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
	I1018 14:23:49.496600 1767290 main.go:141] libmachine: (functional-900196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:a4:ac", ip: ""} in network mk-functional-900196: {Iface:virbr1 ExpiryTime:2025-10-18 15:22:18 +0000 UTC Type:0 Mac:52:54:00:e2:a4:ac Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:functional-900196 Clientid:01:52:54:00:e2:a4:ac}
	I1018 14:23:49.496643 1767290 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined IP address 192.168.39.34 and MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
	I1018 14:23:49.496785 1767290 main.go:141] libmachine: (functional-900196) Calling .DriverName
	I1018 14:23:49.497323 1767290 main.go:141] libmachine: (functional-900196) Calling .DriverName
	I1018 14:23:49.497527 1767290 main.go:141] libmachine: (functional-900196) Calling .DriverName
	I1018 14:23:49.497645 1767290 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 14:23:49.497685 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHHostname
	I1018 14:23:49.497849 1767290 ssh_runner.go:195] Run: cat /version.json
	I1018 14:23:49.497865 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHHostname
	I1018 14:23:49.501034 1767290 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
	I1018 14:23:49.501267 1767290 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
	I1018 14:23:49.501498 1767290 main.go:141] libmachine: (functional-900196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:a4:ac", ip: ""} in network mk-functional-900196: {Iface:virbr1 ExpiryTime:2025-10-18 15:22:18 +0000 UTC Type:0 Mac:52:54:00:e2:a4:ac Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:functional-900196 Clientid:01:52:54:00:e2:a4:ac}
	I1018 14:23:49.501535 1767290 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined IP address 192.168.39.34 and MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
	I1018 14:23:49.501694 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHPort
	I1018 14:23:49.501739 1767290 main.go:141] libmachine: (functional-900196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:a4:ac", ip: ""} in network mk-functional-900196: {Iface:virbr1 ExpiryTime:2025-10-18 15:22:18 +0000 UTC Type:0 Mac:52:54:00:e2:a4:ac Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:functional-900196 Clientid:01:52:54:00:e2:a4:ac}
	I1018 14:23:49.501753 1767290 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined IP address 192.168.39.34 and MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
	I1018 14:23:49.501856 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHKeyPath
	I1018 14:23:49.501929 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHPort
	I1018 14:23:49.501997 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHUsername
	I1018 14:23:49.502043 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHKeyPath
	I1018 14:23:49.502115 1767290 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/functional-900196/id_rsa Username:docker}
	I1018 14:23:49.502153 1767290 main.go:141] libmachine: (functional-900196) Calling .GetSSHUsername
	I1018 14:23:49.502303 1767290 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/functional-900196/id_rsa Username:docker}
	I1018 14:23:49.601429 1767290 ssh_runner.go:195] Run: systemctl --version
	I1018 14:23:49.608215 1767290 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 14:23:49.756143 1767290 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 14:23:49.763326 1767290 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 14:23:49.763431 1767290 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 14:23:49.774856 1767290 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 14:23:49.774877 1767290 start.go:495] detecting cgroup driver to use...
	I1018 14:23:49.774946 1767290 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 14:23:49.796601 1767290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 14:23:49.815980 1767290 docker.go:218] disabling cri-docker service (if available) ...
	I1018 14:23:49.816052 1767290 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 14:23:49.837590 1767290 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 14:23:49.855012 1767290 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 14:23:50.046988 1767290 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 14:23:50.225262 1767290 docker.go:234] disabling docker service ...
	I1018 14:23:50.225329 1767290 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 14:23:50.254595 1767290 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 14:23:50.271747 1767290 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 14:23:50.462882 1767290 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 14:23:50.650318 1767290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
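	With containerd stopped and the docker units masked, CRI-O should be the only active runtime left. A one-line sanity check on the node (a sketch; stopped and masked units report "inactive"):
	
		minikube -p functional-900196 ssh -- sudo systemctl is-active crio docker containerd
		# active / inactive / inactive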
	I1018 14:23:50.667975 1767290 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 14:23:50.698981 1767290 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 14:23:50.699040 1767290 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:23:50.769019 1767290 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 14:23:50.769075 1767290 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:23:50.813671 1767290 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:23:50.848502 1767290 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:23:50.887554 1767290 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 14:23:50.933624 1767290 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:23:50.958246 1767290 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:23:50.997043 1767290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:23:51.025538 1767290 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 14:23:51.051112 1767290 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 14:23:51.080129 1767290 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 14:23:51.433660 1767290 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 14:23:56.696335 1767290 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.262643888s)
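	The restart applies all of the sed edits above; the effective drop-in should now read roughly as follows (reconstructed from the commands, not a dump of the actual file):
	
		pause_image = "registry.k8s.io/pause:3.10.1"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]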
	I1018 14:23:56.696371 1767290 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 14:23:56.696421 1767290 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 14:23:56.702833 1767290 start.go:563] Will wait 60s for crictl version
	I1018 14:23:56.702904 1767290 ssh_runner.go:195] Run: which crictl
	I1018 14:23:56.707755 1767290 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1018 14:23:56.745723 1767290 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1018 14:23:56.745795 1767290 ssh_runner.go:195] Run: crio --version
	I1018 14:23:56.779398 1767290 ssh_runner.go:195] Run: crio --version
	I1018 14:23:56.812317 1767290 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1018 14:23:56.813727 1767290 main.go:141] libmachine: (functional-900196) Calling .GetIP
	I1018 14:23:56.816796 1767290 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
	I1018 14:23:56.817116 1767290 main.go:141] libmachine: (functional-900196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:a4:ac", ip: ""} in network mk-functional-900196: {Iface:virbr1 ExpiryTime:2025-10-18 15:22:18 +0000 UTC Type:0 Mac:52:54:00:e2:a4:ac Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:functional-900196 Clientid:01:52:54:00:e2:a4:ac}
	I1018 14:23:56.817147 1767290 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined IP address 192.168.39.34 and MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
	I1018 14:23:56.817390 1767290 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1018 14:23:56.824271 1767290 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1018 14:23:56.825537 1767290 kubeadm.go:883] updating cluster {Name:functional-900196 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-900196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.34 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 14:23:56.825672 1767290 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 14:23:56.825738 1767290 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 14:23:56.877089 1767290 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 14:23:56.877102 1767290 crio.go:433] Images already preloaded, skipping extraction
	I1018 14:23:56.877166 1767290 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 14:23:56.920576 1767290 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 14:23:56.920593 1767290 cache_images.go:85] Images are preloaded, skipping loading
	I1018 14:23:56.920639 1767290 kubeadm.go:934] updating node { 192.168.39.34 8441 v1.34.1 crio true true} ...
	I1018 14:23:56.920766 1767290 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-900196 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.34
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-900196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 14:23:56.920841 1767290 ssh_runner.go:195] Run: crio config
	I1018 14:23:56.969579 1767290 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1018 14:23:56.969601 1767290 cni.go:84] Creating CNI manager for ""
	I1018 14:23:56.969612 1767290 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 14:23:56.969623 1767290 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 14:23:56.969647 1767290 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.34 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-900196 NodeName:functional-900196 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.34"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.34 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 14:23:56.969783 1767290 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.34
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-900196"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.34"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.34"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 14:23:56.969846 1767290 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 14:23:56.983773 1767290 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 14:23:56.983842 1767290 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 14:23:56.996974 1767290 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1018 14:23:57.019687 1767290 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 14:23:57.042181 1767290 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2067 bytes)
	I1018 14:23:57.065333 1767290 ssh_runner.go:195] Run: grep 192.168.39.34	control-plane.minikube.internal$ /etc/hosts
	I1018 14:23:57.070194 1767290 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 14:23:57.247948 1767290 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 14:23:57.268032 1767290 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196 for IP: 192.168.39.34
	I1018 14:23:57.268047 1767290 certs.go:195] generating shared ca certs ...
	I1018 14:23:57.268063 1767290 certs.go:227] acquiring lock for ca certs: {Name:mk20fae4d22bb4937e66ac0eaa52c1608fa22770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:23:57.268240 1767290 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.key
	I1018 14:23:57.268285 1767290 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.key
	I1018 14:23:57.268291 1767290 certs.go:257] generating profile certs ...
	I1018 14:23:57.268414 1767290 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.key
	I1018 14:23:57.268458 1767290 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/apiserver.key.a1f11bd2
	I1018 14:23:57.268485 1767290 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/proxy-client.key
	I1018 14:23:57.268591 1767290 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/1759792.pem (1338 bytes)
	W1018 14:23:57.268618 1767290 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/1759792_empty.pem, impossibly tiny 0 bytes
	I1018 14:23:57.268623 1767290 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 14:23:57.268641 1767290 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem (1082 bytes)
	I1018 14:23:57.268669 1767290 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/cert.pem (1123 bytes)
	I1018 14:23:57.268685 1767290 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/key.pem (1675 bytes)
	I1018 14:23:57.268724 1767290 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/files/etc/ssl/certs/17597922.pem (1708 bytes)
	I1018 14:23:57.269358 1767290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 14:23:57.303953 1767290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 14:23:57.336023 1767290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 14:23:57.368527 1767290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 14:23:57.400862 1767290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 14:23:57.433362 1767290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 14:23:57.465490 1767290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 14:23:57.498127 1767290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 14:23:57.530259 1767290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/1759792.pem --> /usr/share/ca-certificates/1759792.pem (1338 bytes)
	I1018 14:23:57.562050 1767290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/files/etc/ssl/certs/17597922.pem --> /usr/share/ca-certificates/17597922.pem (1708 bytes)
	I1018 14:23:57.595109 1767290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 14:23:57.627977 1767290 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 14:23:57.650287 1767290 ssh_runner.go:195] Run: openssl version
	I1018 14:23:57.657609 1767290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1759792.pem && ln -fs /usr/share/ca-certificates/1759792.pem /etc/ssl/certs/1759792.pem"
	I1018 14:23:57.672137 1767290 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1759792.pem
	I1018 14:23:57.677957 1767290 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 14:22 /usr/share/ca-certificates/1759792.pem
	I1018 14:23:57.678013 1767290 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1759792.pem
	I1018 14:23:57.686143 1767290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1759792.pem /etc/ssl/certs/51391683.0"
	I1018 14:23:57.699257 1767290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17597922.pem && ln -fs /usr/share/ca-certificates/17597922.pem /etc/ssl/certs/17597922.pem"
	I1018 14:23:57.714855 1767290 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17597922.pem
	I1018 14:23:57.721193 1767290 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 14:22 /usr/share/ca-certificates/17597922.pem
	I1018 14:23:57.721261 1767290 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17597922.pem
	I1018 14:23:57.729739 1767290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17597922.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 14:23:57.743091 1767290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 14:23:57.758670 1767290 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 14:23:57.765228 1767290 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 14:09 /usr/share/ca-certificates/minikubeCA.pem
	I1018 14:23:57.765291 1767290 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 14:23:57.773337 1767290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 14:23:57.786100 1767290 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 14:23:57.791965 1767290 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 14:23:57.799892 1767290 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 14:23:57.808169 1767290 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 14:23:57.816076 1767290 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 14:23:57.824116 1767290 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 14:23:57.832035 1767290 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1018 14:23:57.839764 1767290 kubeadm.go:400] StartCluster: {Name:functional-900196 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-900196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.34 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:23:57.839868 1767290 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 14:23:57.839951 1767290 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 14:23:57.886317 1767290 cri.go:89] found id: "88dbdf96d71bbe880891ce43151faca2a406ca0a6bc43163813a482e8e7b4b10"
	I1018 14:23:57.886334 1767290 cri.go:89] found id: "c5b51b4a4c799496c3145843cf20f4bb06e303ff8f4c636509258d860fa6f005"
	I1018 14:23:57.886337 1767290 cri.go:89] found id: "5ee1e13dc39acf818c45b34aab5a553b0357925c855ed6903a3974b7e38fd710"
	I1018 14:23:57.886355 1767290 cri.go:89] found id: "e89892ddae1dc2d13c768357fc4a9f9f5f5676dbe163ddcf14af300adb499012"
	I1018 14:23:57.886358 1767290 cri.go:89] found id: "6c0794ff6e8e5711e73c6ed64f56ecf0f6dc92706a4d204ee111f11290cf2e44"
	I1018 14:23:57.886362 1767290 cri.go:89] found id: "8a1ba16847db09edd496b432d3f8beb8e87e3ad268c294da60db67bc799aad70"
	I1018 14:23:57.886365 1767290 cri.go:89] found id: "61c12e73bd28c0b40093d93d0b943750242e7fc05c61f6274dd87565887b8725"
	I1018 14:23:57.886368 1767290 cri.go:89] found id: "474a7534567b234dff6b2ab0d73bfc4ee4132b57f72b4ab097f3259af19ec5a3"
	I1018 14:23:57.886370 1767290 cri.go:89] found id: "638ecd8475df04c605957b9d565a28ba289fb37486ce9d4a33cbb6f0d74ace10"
	I1018 14:23:57.886381 1767290 cri.go:89] found id: "e1cd3584673ba528768e39368f3207d4f310be22b34b02462949ef644be2c2fe"
	I1018 14:23:57.886384 1767290 cri.go:89] found id: "cb08bb8cbefbcb917c6e76b06b26f2df89a3e1b6ad4eb06a5ffde2094d46da00"
	I1018 14:23:57.886387 1767290 cri.go:89] found id: ""
	I1018 14:23:57.886441 1767290 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
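The dump above ends with minikube shelling out to "openssl x509 -noout -checkend 86400" for each control-plane certificate, i.e. asking whether the certificate expires within the next 24 hours. A minimal Go sketch of the same check, using only the standard library; the certificate path is copied from the log and is only reachable inside the VM:

// certcheck.go: the 24h expiry check behind `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Equivalent to -checkend 86400: does the cert expire inside the window?
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if soon {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is good for at least 24h")
}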
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-900196 -n functional-900196
helpers_test.go:269: (dbg) Run:  kubectl --context functional-900196 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-9f59p hello-node-connect-7d85dfc575-dd4gd mysql-5bb876957f-lc247 sp-pod
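The field-selector query above ("--field-selector=status.phase!=Running") is how the post-mortem collects non-running pods. A rough client-go equivalent, as a sketch: it assumes k8s.io/client-go, and the kubeconfig path is illustrative rather than anything the test harness actually uses.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a kubeconfig pointing at the cluster; path is illustrative.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Mirror `kubectl get po -A --field-selector=status.phase!=Running`.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}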
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-900196 describe pod hello-node-75c85bcc94-9f59p hello-node-connect-7d85dfc575-dd4gd mysql-5bb876957f-lc247 sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-900196 describe pod hello-node-75c85bcc94-9f59p hello-node-connect-7d85dfc575-dd4gd mysql-5bb876957f-lc247 sp-pod:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-9f59p
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-900196/192.168.39.34
	Start Time:       Sat, 18 Oct 2025 14:24:26 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j5bzj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-j5bzj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m9s                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-9f59p to functional-900196
	  Warning  Failed     4m36s                 kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m (x2 over 4m36s)    kubelet            Error: ErrImagePull
	  Warning  Failed     2m                    kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    108s (x2 over 4m35s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     108s (x2 over 4m35s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    96s (x3 over 6m9s)    kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-dd4gd
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-900196/192.168.39.34
	Start Time:       Sat, 18 Oct 2025 14:24:25 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hp9w4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hp9w4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m9s                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-dd4gd to functional-900196
	  Warning  Failed     2m33s (x2 over 5m6s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m33s (x2 over 5m6s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    2m20s (x2 over 5m6s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     2m20s (x2 over 5m6s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    2m8s (x3 over 6m9s)   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-lc247
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-900196/192.168.39.34
	Start Time:       Sat, 18 Oct 2025 14:24:24 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-796d9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-796d9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m10s                default-scheduler  Successfully assigned default/mysql-5bb876957f-lc247 to functional-900196
	  Warning  Failed     5m37s                kubelet            Failed to pull image "docker.io/mysql:5.7": copying system image from manifest list: determining manifest MIME type for docker://mysql:5.7: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     59s (x3 over 5m37s)  kubelet            Error: ErrImagePull
	  Warning  Failed     59s (x2 over 3m4s)   kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    33s (x4 over 5m36s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     33s (x4 over 5m36s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    18s (x4 over 6m10s)  kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-900196/192.168.39.34
	Start Time:       Sat, 18 Oct 2025 14:24:32 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hr5x7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-hr5x7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m2s                 default-scheduler  Successfully assigned default/sp-pod to functional-900196
	  Warning  Failed     3m35s                kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     90s (x2 over 3m35s)  kubelet            Error: ErrImagePull
	  Warning  Failed     90s                  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    79s (x2 over 3m34s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     79s (x2 over 3m34s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    65s (x3 over 6m2s)   kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (370.38s)
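Every ErrImagePull/ImagePullBackOff event in the post-mortem above has the same root cause: Docker Hub's anonymous pull rate limit ("toomanyrequests"). Docker documents a way to inspect the current limit state without consuming a pull, by requesting an anonymous token and HEAD-ing the manifest of the ratelimitpreview/test repository, then reading the ratelimit-* response headers. A minimal Go sketch of that check, with the endpoints as documented by Docker and error handling kept short:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Anonymous pull token for the rate-limit test repo documented by Docker.
	resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		log.Fatal(err)
	}

	// HEAD the manifest; the registry reports current limits in headers
	// without counting as a pull.
	req, err := http.NewRequest(http.MethodHead, "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	head, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer head.Body.Close()
	fmt.Println("ratelimit-limit:    ", head.Header.Get("ratelimit-limit"))
	fmt.Println("ratelimit-remaining:", head.Header.Get("ratelimit-remaining"))
}

Authenticated pulls get a higher quota, so seeding the CI host with Docker Hub credentials, or mirroring the test images off docker.io, would avoid this class of failure.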

                                                
                                    
TestFunctional/parallel/MySQL (603.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-900196 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-lc247" [cc1250e9-51ee-46d8-b2ff-fb0e49ef0d30] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
functional_test.go:1804: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-900196 -n functional-900196
functional_test.go:1804: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-10-18 14:34:24.853586312 +0000 UTC m=+1558.588943350
functional_test.go:1804: (dbg) Run:  kubectl --context functional-900196 describe po mysql-5bb876957f-lc247 -n default
functional_test.go:1804: (dbg) kubectl --context functional-900196 describe po mysql-5bb876957f-lc247 -n default:
Name:             mysql-5bb876957f-lc247
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-900196/192.168.39.34
Start Time:       Sat, 18 Oct 2025 14:24:24 +0000
Labels:           app=mysql
                  pod-template-hash=5bb876957f
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:           10.244.0.7
Controlled By:  ReplicaSet/mysql-5bb876957f
Containers:
  mysql:
    Container ID:   
    Image:          docker.io/mysql:5.7
    Image ID:       
    Port:           3306/TCP (mysql)
    Host Port:      0/TCP (mysql)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-796d9 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-796d9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-5bb876957f-lc247 to functional-900196
  Warning  Failed     9m26s                  kubelet            Failed to pull image "docker.io/mysql:5.7": copying system image from manifest list: determining manifest MIME type for docker://mysql:5.7: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     4m48s (x2 over 6m53s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     2m13s (x4 over 9m26s)  kubelet            Error: ErrImagePull
  Warning  Failed     2m13s                  kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    56s (x10 over 9m25s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
  Warning  Failed     56s (x10 over 9m25s)   kubelet            Error: ImagePullBackOff
  Normal   Pulling    42s (x5 over 9m59s)    kubelet            Pulling image "docker.io/mysql:5.7"
functional_test.go:1804: (dbg) Run:  kubectl --context functional-900196 logs mysql-5bb876957f-lc247 -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-900196 logs mysql-5bb876957f-lc247 -n default: exit status 1 (80.682737ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-5bb876957f-lc247" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1804: kubectl --context functional-900196 logs mysql-5bb876957f-lc247 -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
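The 10m0s wait that just timed out is a poll loop over pods matching "app=mysql". A condensed sketch of such a loop with client-go; the package paths are real, but the kubeconfig path and intervals are illustrative, not the test's actual values.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Poll every 5s, for up to 10m, for a Running and Ready pod with app=mysql.
	err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 10*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("default").List(ctx, metav1.ListOptions{LabelSelector: "app=mysql"})
			if err != nil {
				return false, err
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					continue
				}
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return true, nil
					}
				}
			}
			return false, nil // keep polling
		})
	if err != nil {
		log.Fatal(err) // "context deadline exceeded", as in the failure above
	}
	fmt.Println("mysql pod is Ready")
}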
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-900196 -n functional-900196
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-900196 logs -n 25: (1.914158364s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                ARGS                                                                 │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-900196 ssh findmnt -T /mount-9p | grep 9p                                                                                │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:30 UTC │                     │
	│ mount     │ -p functional-900196 /tmp/TestFunctionalparallelMountCmdany-port1051280194/001:/mount-9p --alsologtostderr -v=1                     │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:30 UTC │                     │
	│ ssh       │ functional-900196 ssh findmnt -T /mount-9p | grep 9p                                                                                │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:30 UTC │ 18 Oct 25 14:30 UTC │
	│ ssh       │ functional-900196 ssh -- ls -la /mount-9p                                                                                           │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:30 UTC │ 18 Oct 25 14:30 UTC │
	│ ssh       │ functional-900196 ssh cat /mount-9p/test-1760797836634853090                                                                        │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:30 UTC │ 18 Oct 25 14:30 UTC │
	│ ssh       │ functional-900196 ssh stat /mount-9p/created-by-test                                                                                │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │ 18 Oct 25 14:32 UTC │
	│ ssh       │ functional-900196 ssh stat /mount-9p/created-by-pod                                                                                 │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │ 18 Oct 25 14:32 UTC │
	│ ssh       │ functional-900196 ssh sudo umount -f /mount-9p                                                                                      │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │ 18 Oct 25 14:32 UTC │
	│ ssh       │ functional-900196 ssh findmnt -T /mount-9p | grep 9p                                                                                │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │                     │
	│ mount     │ -p functional-900196 /tmp/TestFunctionalparallelMountCmdspecific-port3223273432/001:/mount-9p --alsologtostderr -v=1 --port 46464   │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │                     │
	│ ssh       │ functional-900196 ssh findmnt -T /mount-9p | grep 9p                                                                                │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │ 18 Oct 25 14:32 UTC │
	│ ssh       │ functional-900196 ssh -- ls -la /mount-9p                                                                                           │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │ 18 Oct 25 14:32 UTC │
	│ ssh       │ functional-900196 ssh sudo umount -f /mount-9p                                                                                      │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │                     │
	│ mount     │ -p functional-900196 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3852977713/001:/mount2 --alsologtostderr -v=1                  │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │                     │
	│ mount     │ -p functional-900196 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3852977713/001:/mount3 --alsologtostderr -v=1                  │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │                     │
	│ ssh       │ functional-900196 ssh findmnt -T /mount1                                                                                            │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │                     │
	│ mount     │ -p functional-900196 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3852977713/001:/mount1 --alsologtostderr -v=1                  │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │                     │
	│ ssh       │ functional-900196 ssh findmnt -T /mount1                                                                                            │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │ 18 Oct 25 14:32 UTC │
	│ ssh       │ functional-900196 ssh findmnt -T /mount2                                                                                            │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │ 18 Oct 25 14:32 UTC │
	│ ssh       │ functional-900196 ssh findmnt -T /mount3                                                                                            │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │ 18 Oct 25 14:32 UTC │
	│ mount     │ -p functional-900196 --kill=true                                                                                                    │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │                     │
	│ start     │ -p functional-900196 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │                     │
	│ start     │ -p functional-900196 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │                     │
	│ start     │ -p functional-900196 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false           │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-900196 --alsologtostderr -v=1                                                                      │ functional-900196 │ jenkins │ v1.37.0 │ 18 Oct 25 14:32 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 14:32:20
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 14:32:20.260818 1770878 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:32:20.261074 1770878 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:32:20.261085 1770878 out.go:374] Setting ErrFile to fd 2...
	I1018 14:32:20.261090 1770878 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:32:20.261277 1770878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1755824/.minikube/bin
	I1018 14:32:20.261753 1770878 out.go:368] Setting JSON to false
	I1018 14:32:20.262755 1770878 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":22488,"bootTime":1760775452,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 14:32:20.262872 1770878 start.go:141] virtualization: kvm guest
	I1018 14:32:20.264871 1770878 out.go:179] * [functional-900196] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 14:32:20.266558 1770878 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 14:32:20.266576 1770878 notify.go:220] Checking for updates...
	I1018 14:32:20.268996 1770878 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 14:32:20.270583 1770878 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1755824/kubeconfig
	I1018 14:32:20.271947 1770878 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1755824/.minikube
	I1018 14:32:20.276153 1770878 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 14:32:20.277526 1770878 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 14:32:20.279077 1770878 config.go:182] Loaded profile config "functional-900196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:32:20.279476 1770878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:32:20.279547 1770878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:32:20.293619 1770878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40129
	I1018 14:32:20.294123 1770878 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:32:20.294734 1770878 main.go:141] libmachine: Using API Version  1
	I1018 14:32:20.294763 1770878 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:32:20.295134 1770878 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:32:20.295334 1770878 main.go:141] libmachine: (functional-900196) Calling .DriverName
	I1018 14:32:20.295663 1770878 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 14:32:20.296029 1770878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:32:20.296083 1770878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:32:20.310256 1770878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36629
	I1018 14:32:20.310819 1770878 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:32:20.311405 1770878 main.go:141] libmachine: Using API Version  1
	I1018 14:32:20.311440 1770878 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:32:20.311890 1770878 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:32:20.312119 1770878 main.go:141] libmachine: (functional-900196) Calling .DriverName
	I1018 14:32:20.344597 1770878 out.go:179] * Using the kvm2 driver based on existing profile
	I1018 14:32:20.345696 1770878 start.go:305] selected driver: kvm2
	I1018 14:32:20.345710 1770878 start.go:925] validating driver "kvm2" against &{Name:functional-900196 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-900196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.34 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:32:20.345818 1770878 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 14:32:20.346798 1770878 cni.go:84] Creating CNI manager for ""
	I1018 14:32:20.346852 1770878 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 14:32:20.346901 1770878 start.go:349] cluster config:
	{Name:functional-900196 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-900196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.34 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:32:20.348722 1770878 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 18 14:34:25 functional-900196 crio[5303]: time="2025-10-18 14:34:25.748494571Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760798065748470104,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:167805,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0c40cff0-987f-4e9e-9eaa-c40f69007686 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:34:25 functional-900196 crio[5303]: time="2025-10-18 14:34:25.749318667Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=97f4df31-e0df-4ca6-a834-4fd1c5073709 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:34:25 functional-900196 crio[5303]: time="2025-10-18 14:34:25.749405440Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=97f4df31-e0df-4ca6-a834-4fd1c5073709 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:34:25 functional-900196 crio[5303]: time="2025-10-18 14:34:25.749748991Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3a6150c71b2aba5471783b062a7b940e5d8823a4ffdc974bc8cbcafef4b47a8c,PodSandboxId:462f63cd1cd9b507f8b65ae177f769b32917291abbec415412e8cf1d2f7bad32,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1760797932903872088,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b1c21ed2-b86c-4e19-a613-f6d67149156e,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ef6272e347114f48d4fe3e59f62f8fbd9d6fe65a3c2376c1e41119952c7a330,PodSandboxId:6c396b3a6d33f5432556b5422742fa4bc0bfd9450fb4a32311d54c98d5a37d0b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760797445461154411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7m2x4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d219f60f-61db-4f59-beb6-f1014320fded,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08267a0026df926525df9dfe06132bd39e9bdc06eb9ee97f4286651cddabc784,PodSandboxId:643097cfed919a042fe18fd1a3ba3decb51ffd3e08a2460e6bc52f5766ac082e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760797445279809730,Labels:map[strin
g]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwq2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28c9616d-7ca6-4480-bb36-f61b451a4b23,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e1bccb3b64c5b5b17aec547dccfe3b87145e92687850c7b5f2eeb2fbecd51b8,PodSandboxId:1915389128f2ce0a6550ee48b07913f69022b98604f123ff6e8a8e1b36273e6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760797445247219494,Labels:map[string]string{io.kubernetes.c
ontainer.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab2d89b-2ccc-43cd-874a-1c4e895df2f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dbff326b6cc8afa6d03358920f9853179471569f784102d88c64cdf4fd85912,PodSandboxId:d89012bc5dd1a9e67f0d93b8983b794c98bf6b83893054547c7aba1c7a22b45c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760797441534367275,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1539d00838a4465e9c70da2faa0ecce0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aedc2a498839e385f7a9db347ff30ad23bb639573ca6c9ff50a4254948df22d0,PodSandboxId:3f0d73a3a97301f6a016cca5df90761dc2bdf2226ad014d89451471e0e456d0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb
5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760797441504384963,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97ed5fabf9bf40e88932da5fec13829b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b07ae915d18fc54175e6566e7805a5718e41da62fd8311a3fe672c42d5f4ba4d,PodSandboxId:5dd0badd5d559e2de9a32427d0e5bf6d28cf72338ea93be9001384c7b210ff8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760797441484574039,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0adc310d24a81dac60c5ad1f35e7c92b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53113ba9ccc6d684cb2173828ed00fedd934e317a9477387680bd35747276790,PodSandboxId:a35ae590fec1283d5d898322419c35e8a914d929214811cb84f0e5b076fbbac0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{I
mage:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760797441418137967,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db61fe1a843b72c5a01c6ce296ea0908,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88dbdf96d71bbe880891ce43151faca2a406ca0a6bc43163813a482e8e7b4b10,PodSandboxId:a31dcfcaadf596f99b1b00b651e185a3a4c
96ef68508ad8e4b58763486df5dd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760797401310576970,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7m2x4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d219f60f-61db-4f59-beb6-f1014320fded,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}
],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5b51b4a4c799496c3145843cf20f4bb06e303ff8f4c636509258d860fa6f005,PodSandboxId:d6fe8246788800a71673d766b79d51cda6360d6c9b9f1f5de1c292ab7ae27b55,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760797400979371781,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab2d89b-2ccc-43cd-874a-1c4e895df2f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ee1e13dc39acf818c45b34aab5a553b0357925c855ed6903a3974b7e38fd710,PodSandboxId:58706c3ba9833de77c1199c0be5d66ba9b5d1175cad1f0aa8ca286571a930d73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760797400908583885,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwq2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28c9616d-7ca6-4480-bb36-f61b451a4b23,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0794ff6e8e5711e73c6ed64f56ecf0f6dc92706a4d204ee111f11290cf2e44,PodSandboxId:50b972f1c52368f0fbc439ffc234e98462143a277880a96bd1e67be20b278229,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760797397092229596,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97ed5fabf9bf40e88932da5fec13829b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPor
t\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e89892ddae1dc2d13c768357fc4a9f9f5f5676dbe163ddcf14af300adb499012,PodSandboxId:e0e939baaa67d0fd4f6816b8d93aa969b6b5bf84197f0d2445e6e3e01e191cd3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760797397121488699,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0adc310d24a81dac60c5ad1f35e7c92b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1ba16847db09edd496b432d3f8beb8e87e3ad268c294da60db67bc799aad70,PodSandboxId:88e38926414dc77c4f11b9e11c309696d7379acb8fe1aa3716948b3c8f7f43ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760797397082298798,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-900196,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 1539d00838a4465e9c70da2faa0ecce0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=97f4df31-e0df-4ca6-a834-4fd1c5073709 name=/runtime.v1.RuntimeService/ListContainers
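The Version/ImageFsInfo/ListContainers triplet above is the kubelet's routine CRI polling of CRI-O; the same queries can be issued by hand to cross-check the daemon. A sketch, assuming the default CRI-O socket path inside the node (reachable via out/minikube-linux-amd64 ssh -p functional-900196):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a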
	Oct 18 14:34:25 functional-900196 crio[5303]: time="2025-10-18 14:34:25.802889318Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9b1e6921-ac49-43cd-b2b4-2e373d69cd32 name=/runtime.v1.RuntimeService/Version
	Oct 18 14:34:25 functional-900196 crio[5303]: time="2025-10-18 14:34:25.802988990Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9b1e6921-ac49-43cd-b2b4-2e373d69cd32 name=/runtime.v1.RuntimeService/Version
	Oct 18 14:34:25 functional-900196 crio[5303]: time="2025-10-18 14:34:25.806220235Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e1e7f102-7313-4cb0-a934-5c272c473ac9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:34:25 functional-900196 crio[5303]: time="2025-10-18 14:34:25.807372228Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760798065807343974,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:167805,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1e7f102-7313-4cb0-a934-5c272c473ac9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:34:25 functional-900196 crio[5303]: time="2025-10-18 14:34:25.809625166Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d2664582-a19b-4bbe-bafd-53fd795b446d name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:34:25 functional-900196 crio[5303]: time="2025-10-18 14:34:25.809931064Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d2664582-a19b-4bbe-bafd-53fd795b446d name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:34:25 functional-900196 crio[5303]: time="2025-10-18 14:34:25.810749283Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3a6150c71b2aba5471783b062a7b940e5d8823a4ffdc974bc8cbcafef4b47a8c,PodSandboxId:462f63cd1cd9b507f8b65ae177f769b32917291abbec415412e8cf1d2f7bad32,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1760797932903872088,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b1c21ed2-b86c-4e19-a613-f6d67149156e,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ef6272e347114f48d4fe3e59f62f8fbd9d6fe65a3c2376c1e41119952c7a330,PodSandboxId:6c396b3a6d33f5432556b5422742fa4bc0bfd9450fb4a32311d54c98d5a37d0b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760797445461154411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7m2x4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d219f60f-61db-4f59-beb6-f1014320fded,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08267a0026df926525df9dfe06132bd39e9bdc06eb9ee97f4286651cddabc784,PodSandboxId:643097cfed919a042fe18fd1a3ba3decb51ffd3e08a2460e6bc52f5766ac082e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760797445279809730,Labels:map[strin
g]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwq2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28c9616d-7ca6-4480-bb36-f61b451a4b23,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e1bccb3b64c5b5b17aec547dccfe3b87145e92687850c7b5f2eeb2fbecd51b8,PodSandboxId:1915389128f2ce0a6550ee48b07913f69022b98604f123ff6e8a8e1b36273e6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760797445247219494,Labels:map[string]string{io.kubernetes.c
ontainer.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab2d89b-2ccc-43cd-874a-1c4e895df2f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dbff326b6cc8afa6d03358920f9853179471569f784102d88c64cdf4fd85912,PodSandboxId:d89012bc5dd1a9e67f0d93b8983b794c98bf6b83893054547c7aba1c7a22b45c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760797441534367275,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1539d00838a4465e9c70da2faa0ecce0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aedc2a498839e385f7a9db347ff30ad23bb639573ca6c9ff50a4254948df22d0,PodSandboxId:3f0d73a3a97301f6a016cca5df90761dc2bdf2226ad014d89451471e0e456d0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb
5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760797441504384963,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97ed5fabf9bf40e88932da5fec13829b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b07ae915d18fc54175e6566e7805a5718e41da62fd8311a3fe672c42d5f4ba4d,PodSandboxId:5dd0badd5d559e2de9a32427d0e5bf6d28cf72338ea93be9001384c7b210ff8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760797441484574039,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0adc310d24a81dac60c5ad1f35e7c92b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53113ba9ccc6d684cb2173828ed00fedd934e317a9477387680bd35747276790,PodSandboxId:a35ae590fec1283d5d898322419c35e8a914d929214811cb84f0e5b076fbbac0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{I
mage:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760797441418137967,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db61fe1a843b72c5a01c6ce296ea0908,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88dbdf96d71bbe880891ce43151faca2a406ca0a6bc43163813a482e8e7b4b10,PodSandboxId:a31dcfcaadf596f99b1b00b651e185a3a4c
96ef68508ad8e4b58763486df5dd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760797401310576970,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7m2x4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d219f60f-61db-4f59-beb6-f1014320fded,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}
],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5b51b4a4c799496c3145843cf20f4bb06e303ff8f4c636509258d860fa6f005,PodSandboxId:d6fe8246788800a71673d766b79d51cda6360d6c9b9f1f5de1c292ab7ae27b55,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760797400979371781,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab2d89b-2ccc-43cd-874a-1c4e895df2f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ee1e13dc39acf818c45b34aab5a553b0357925c855ed6903a3974b7e38fd710,PodSandboxId:58706c3ba9833de77c1199c0be5d66ba9b5d1175cad1f0aa8ca286571a930d73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760797400908583885,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwq2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28c9616d-7ca6-4480-bb36-f61b451a4b23,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0794ff6e8e5711e73c6ed64f56ecf0f6dc92706a4d204ee111f11290cf2e44,PodSandboxId:50b972f1c52368f0fbc439ffc234e98462143a277880a96bd1e67be20b278229,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760797397092229596,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97ed5fabf9bf40e88932da5fec13829b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPor
t\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e89892ddae1dc2d13c768357fc4a9f9f5f5676dbe163ddcf14af300adb499012,PodSandboxId:e0e939baaa67d0fd4f6816b8d93aa969b6b5bf84197f0d2445e6e3e01e191cd3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760797397121488699,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0adc310d24a81dac60c5ad1f35e7c92b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1ba16847db09edd496b432d3f8beb8e87e3ad268c294da60db67bc799aad70,PodSandboxId:88e38926414dc77c4f11b9e11c309696d7379acb8fe1aa3716948b3c8f7f43ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760797397082298798,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-900196,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 1539d00838a4465e9c70da2faa0ecce0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d2664582-a19b-4bbe-bafd-53fd795b446d name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:34:25 functional-900196 crio[5303]: time="2025-10-18 14:34:25.852335048Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3d1fba2d-1773-4e29-9799-7c792edcf60a name=/runtime.v1.RuntimeService/Version
	Oct 18 14:34:25 functional-900196 crio[5303]: time="2025-10-18 14:34:25.852409574Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3d1fba2d-1773-4e29-9799-7c792edcf60a name=/runtime.v1.RuntimeService/Version
	Oct 18 14:34:25 functional-900196 crio[5303]: time="2025-10-18 14:34:25.854319073Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=569fa325-26bb-45d1-95d3-effd250a6f8b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:34:25 functional-900196 crio[5303]: time="2025-10-18 14:34:25.855074640Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760798065854964132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:167805,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=569fa325-26bb-45d1-95d3-effd250a6f8b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:34:25 functional-900196 crio[5303]: time="2025-10-18 14:34:25.855828300Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b5ccbd2-3479-4f0d-8ae7-33d395935820 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:34:25 functional-900196 crio[5303]: time="2025-10-18 14:34:25.855905523Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b5ccbd2-3479-4f0d-8ae7-33d395935820 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:34:25 functional-900196 crio[5303]: time="2025-10-18 14:34:25.856199488Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3a6150c71b2aba5471783b062a7b940e5d8823a4ffdc974bc8cbcafef4b47a8c,PodSandboxId:462f63cd1cd9b507f8b65ae177f769b32917291abbec415412e8cf1d2f7bad32,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1760797932903872088,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b1c21ed2-b86c-4e19-a613-f6d67149156e,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ef6272e347114f48d4fe3e59f62f8fbd9d6fe65a3c2376c1e41119952c7a330,PodSandboxId:6c396b3a6d33f5432556b5422742fa4bc0bfd9450fb4a32311d54c98d5a37d0b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760797445461154411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7m2x4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d219f60f-61db-4f59-beb6-f1014320fded,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08267a0026df926525df9dfe06132bd39e9bdc06eb9ee97f4286651cddabc784,PodSandboxId:643097cfed919a042fe18fd1a3ba3decb51ffd3e08a2460e6bc52f5766ac082e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760797445279809730,Labels:map[strin
g]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwq2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28c9616d-7ca6-4480-bb36-f61b451a4b23,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e1bccb3b64c5b5b17aec547dccfe3b87145e92687850c7b5f2eeb2fbecd51b8,PodSandboxId:1915389128f2ce0a6550ee48b07913f69022b98604f123ff6e8a8e1b36273e6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760797445247219494,Labels:map[string]string{io.kubernetes.c
ontainer.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab2d89b-2ccc-43cd-874a-1c4e895df2f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dbff326b6cc8afa6d03358920f9853179471569f784102d88c64cdf4fd85912,PodSandboxId:d89012bc5dd1a9e67f0d93b8983b794c98bf6b83893054547c7aba1c7a22b45c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760797441534367275,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1539d00838a4465e9c70da2faa0ecce0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aedc2a498839e385f7a9db347ff30ad23bb639573ca6c9ff50a4254948df22d0,PodSandboxId:3f0d73a3a97301f6a016cca5df90761dc2bdf2226ad014d89451471e0e456d0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb
5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760797441504384963,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97ed5fabf9bf40e88932da5fec13829b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b07ae915d18fc54175e6566e7805a5718e41da62fd8311a3fe672c42d5f4ba4d,PodSandboxId:5dd0badd5d559e2de9a32427d0e5bf6d28cf72338ea93be9001384c7b210ff8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760797441484574039,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0adc310d24a81dac60c5ad1f35e7c92b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53113ba9ccc6d684cb2173828ed00fedd934e317a9477387680bd35747276790,PodSandboxId:a35ae590fec1283d5d898322419c35e8a914d929214811cb84f0e5b076fbbac0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{I
mage:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760797441418137967,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db61fe1a843b72c5a01c6ce296ea0908,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88dbdf96d71bbe880891ce43151faca2a406ca0a6bc43163813a482e8e7b4b10,PodSandboxId:a31dcfcaadf596f99b1b00b651e185a3a4c
96ef68508ad8e4b58763486df5dd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760797401310576970,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7m2x4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d219f60f-61db-4f59-beb6-f1014320fded,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}
],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5b51b4a4c799496c3145843cf20f4bb06e303ff8f4c636509258d860fa6f005,PodSandboxId:d6fe8246788800a71673d766b79d51cda6360d6c9b9f1f5de1c292ab7ae27b55,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760797400979371781,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab2d89b-2ccc-43cd-874a-1c4e895df2f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ee1e13dc39acf818c45b34aab5a553b0357925c855ed6903a3974b7e38fd710,PodSandboxId:58706c3ba9833de77c1199c0be5d66ba9b5d1175cad1f0aa8ca286571a930d73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760797400908583885,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwq2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28c9616d-7ca6-4480-bb36-f61b451a4b23,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0794ff6e8e5711e73c6ed64f56ecf0f6dc92706a4d204ee111f11290cf2e44,PodSandboxId:50b972f1c52368f0fbc439ffc234e98462143a277880a96bd1e67be20b278229,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760797397092229596,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97ed5fabf9bf40e88932da5fec13829b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPor
t\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e89892ddae1dc2d13c768357fc4a9f9f5f5676dbe163ddcf14af300adb499012,PodSandboxId:e0e939baaa67d0fd4f6816b8d93aa969b6b5bf84197f0d2445e6e3e01e191cd3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760797397121488699,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0adc310d24a81dac60c5ad1f35e7c92b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1ba16847db09edd496b432d3f8beb8e87e3ad268c294da60db67bc799aad70,PodSandboxId:88e38926414dc77c4f11b9e11c309696d7379acb8fe1aa3716948b3c8f7f43ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760797397082298798,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-900196,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 1539d00838a4465e9c70da2faa0ecce0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b5ccbd2-3479-4f0d-8ae7-33d395935820 name=/runtime.v1.RuntimeService/ListContainers
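Note the Attempt pattern in the list: etcd, kube-scheduler, kube-controller-manager, coredns, kube-proxy, and storage-provisioner each have a CONTAINER_EXITED entry at Attempt:1 and a CONTAINER_RUNNING entry at Attempt:2, while kube-apiserver appears only once at Attempt:0, consistent with the functional test restarting the cluster components. To extract that summary from a live node instead of reading the raw protobuf dump, something like the following works (jq is an assumption here, not part of the test harness):

	sudo crictl ps -a -o json | jq -r '.containers[] | "\(.metadata.name)\t\(.metadata.attempt)\t\(.state)"'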
	Oct 18 14:34:25 functional-900196 crio[5303]: time="2025-10-18 14:34:25.898394224Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c400719c-5f6d-43c2-87cf-e0a9122cc86f name=/runtime.v1.RuntimeService/Version
	Oct 18 14:34:25 functional-900196 crio[5303]: time="2025-10-18 14:34:25.898468642Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c400719c-5f6d-43c2-87cf-e0a9122cc86f name=/runtime.v1.RuntimeService/Version
	Oct 18 14:34:25 functional-900196 crio[5303]: time="2025-10-18 14:34:25.900779892Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1dbacc93-11c7-415e-a5f2-ded3480ec1e6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:34:25 functional-900196 crio[5303]: time="2025-10-18 14:34:25.901499101Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760798065901473774,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:167805,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1dbacc93-11c7-415e-a5f2-ded3480ec1e6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 14:34:25 functional-900196 crio[5303]: time="2025-10-18 14:34:25.902451522Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a65071c1-407f-4209-bce0-68b4746a0583 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:34:25 functional-900196 crio[5303]: time="2025-10-18 14:34:25.902736161Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a65071c1-407f-4209-bce0-68b4746a0583 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 14:34:25 functional-900196 crio[5303]: time="2025-10-18 14:34:25.903031462Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3a6150c71b2aba5471783b062a7b940e5d8823a4ffdc974bc8cbcafef4b47a8c,PodSandboxId:462f63cd1cd9b507f8b65ae177f769b32917291abbec415412e8cf1d2f7bad32,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1760797932903872088,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b1c21ed2-b86c-4e19-a613-f6d67149156e,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ef6272e347114f48d4fe3e59f62f8fbd9d6fe65a3c2376c1e41119952c7a330,PodSandboxId:6c396b3a6d33f5432556b5422742fa4bc0bfd9450fb4a32311d54c98d5a37d0b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760797445461154411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7m2x4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d219f60f-61db-4f59-beb6-f1014320fded,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08267a0026df926525df9dfe06132bd39e9bdc06eb9ee97f4286651cddabc784,PodSandboxId:643097cfed919a042fe18fd1a3ba3decb51ffd3e08a2460e6bc52f5766ac082e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760797445279809730,Labels:map[strin
g]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwq2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28c9616d-7ca6-4480-bb36-f61b451a4b23,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e1bccb3b64c5b5b17aec547dccfe3b87145e92687850c7b5f2eeb2fbecd51b8,PodSandboxId:1915389128f2ce0a6550ee48b07913f69022b98604f123ff6e8a8e1b36273e6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760797445247219494,Labels:map[string]string{io.kubernetes.c
ontainer.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab2d89b-2ccc-43cd-874a-1c4e895df2f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dbff326b6cc8afa6d03358920f9853179471569f784102d88c64cdf4fd85912,PodSandboxId:d89012bc5dd1a9e67f0d93b8983b794c98bf6b83893054547c7aba1c7a22b45c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760797441534367275,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1539d00838a4465e9c70da2faa0ecce0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aedc2a498839e385f7a9db347ff30ad23bb639573ca6c9ff50a4254948df22d0,PodSandboxId:3f0d73a3a97301f6a016cca5df90761dc2bdf2226ad014d89451471e0e456d0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb
5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760797441504384963,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97ed5fabf9bf40e88932da5fec13829b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b07ae915d18fc54175e6566e7805a5718e41da62fd8311a3fe672c42d5f4ba4d,PodSandboxId:5dd0badd5d559e2de9a32427d0e5bf6d28cf72338ea93be9001384c7b210ff8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760797441484574039,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0adc310d24a81dac60c5ad1f35e7c92b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53113ba9ccc6d684cb2173828ed00fedd934e317a9477387680bd35747276790,PodSandboxId:a35ae590fec1283d5d898322419c35e8a914d929214811cb84f0e5b076fbbac0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{I
mage:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760797441418137967,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db61fe1a843b72c5a01c6ce296ea0908,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88dbdf96d71bbe880891ce43151faca2a406ca0a6bc43163813a482e8e7b4b10,PodSandboxId:a31dcfcaadf596f99b1b00b651e185a3a4c
96ef68508ad8e4b58763486df5dd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760797401310576970,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7m2x4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d219f60f-61db-4f59-beb6-f1014320fded,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}
],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5b51b4a4c799496c3145843cf20f4bb06e303ff8f4c636509258d860fa6f005,PodSandboxId:d6fe8246788800a71673d766b79d51cda6360d6c9b9f1f5de1c292ab7ae27b55,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760797400979371781,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab2d89b-2ccc-43cd-874a-1c4e895df2f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ee1e13dc39acf818c45b34aab5a553b0357925c855ed6903a3974b7e38fd710,PodSandboxId:58706c3ba9833de77c1199c0be5d66ba9b5d1175cad1f0aa8ca286571a930d73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760797400908583885,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lwq2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28c9616d-7ca6-4480-bb36-f61b451a4b23,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0794ff6e8e5711e73c6ed64f56ecf0f6dc92706a4d204ee111f11290cf2e44,PodSandboxId:50b972f1c52368f0fbc439ffc234e98462143a277880a96bd1e67be20b278229,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760797397092229596,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97ed5fabf9bf40e88932da5fec13829b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPor
t\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e89892ddae1dc2d13c768357fc4a9f9f5f5676dbe163ddcf14af300adb499012,PodSandboxId:e0e939baaa67d0fd4f6816b8d93aa969b6b5bf84197f0d2445e6e3e01e191cd3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760797397121488699,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-900196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0adc310d24a81dac60c5ad1f35e7c92b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1ba16847db09edd496b432d3f8beb8e87e3ad268c294da60db67bc799aad70,PodSandboxId:88e38926414dc77c4f11b9e11c309696d7379acb8fe1aa3716948b3c8f7f43ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760797397082298798,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-900196,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 1539d00838a4465e9c70da2faa0ecce0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a65071c1-407f-4209-bce0-68b4746a0583 name=/runtime.v1.RuntimeService/ListContainers
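The ListContainersResponse above is CRI-O's reply to a /runtime.v1.RuntimeService/ListContainers gRPC call. A minimal Go sketch of the same call follows; the k8s.io/cri-api module, the grpc-go dialing style, and CRI-O's default socket path /var/run/crio/crio.sock are assumptions here, not taken from this report.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// CRI-O serves the CRI on a local unix socket; no TLS is involved.
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// An empty request lists every container, running or exited, which is
    	// the Containers:[...] payload shown in the response above.
    	resp, err := runtimeapi.NewRuntimeServiceClient(conn).
    		ListContainers(ctx, &runtimeapi.ListContainersRequest{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range resp.Containers {
    		fmt.Printf("%s %-25s attempt=%d state=%s\n",
    			c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
    	}
    }

The 13-character ID prefix printed here is the same truncation used in the container status table below.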
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3a6150c71b2ab       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   2 minutes ago       Exited              mount-munger              0                   462f63cd1cd9b       busybox-mount
	8ef6272e34711       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      10 minutes ago      Running             coredns                   2                   6c396b3a6d33f       coredns-66bc5c9577-7m2x4
	08267a0026df9       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      10 minutes ago      Running             kube-proxy                2                   643097cfed919       kube-proxy-lwq2l
	0e1bccb3b64c5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Running             storage-provisioner       2                   1915389128f2c       storage-provisioner
	8dbff326b6cc8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      10 minutes ago      Running             kube-controller-manager   2                   d89012bc5dd1a       kube-controller-manager-functional-900196
	aedc2a498839e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      10 minutes ago      Running             etcd                      2                   3f0d73a3a9730       etcd-functional-900196
	b07ae915d18fc       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      10 minutes ago      Running             kube-scheduler            2                   5dd0badd5d559       kube-scheduler-functional-900196
	53113ba9ccc6d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      10 minutes ago      Running             kube-apiserver            0                   a35ae590fec12       kube-apiserver-functional-900196
	88dbdf96d71bb       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 minutes ago      Exited              coredns                   1                   a31dcfcaadf59       coredns-66bc5c9577-7m2x4
	c5b51b4a4c799       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 minutes ago      Exited              storage-provisioner       1                   d6fe824678880       storage-provisioner
	5ee1e13dc39ac       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      11 minutes ago      Exited              kube-proxy                1                   58706c3ba9833       kube-proxy-lwq2l
	e89892ddae1dc       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      11 minutes ago      Exited              kube-scheduler            1                   e0e939baaa67d       kube-scheduler-functional-900196
	6c0794ff6e8e5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      11 minutes ago      Exited              etcd                      1                   50b972f1c5236       etcd-functional-900196
	8a1ba16847db0       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      11 minutes ago      Exited              kube-controller-manager   1                   88e38926414dc       kube-controller-manager-functional-900196
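The CreatedAt values in the ListContainers payload above are Unix timestamps in nanoseconds, which is where the "2 minutes ago" / "10 minutes ago" ages in this table come from. A tiny sketch of the conversion:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// CreatedAt for the mount-munger container, copied from the payload above.
    	const createdAt = int64(1760797932903872088)

    	t := time.Unix(0, createdAt).UTC() // nanoseconds since the Unix epoch
    	fmt.Println(t.Format(time.RFC3339))           // 2025-10-18T14:32:12Z
    	fmt.Println("age:", time.Since(t).Round(time.Minute))
    }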
	
	
	==> coredns [88dbdf96d71bbe880891ce43151faca2a406ca0a6bc43163813a482e8e7b4b10] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42586 - 65064 "HINFO IN 7206085342544834509.5779663432164893704. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.097798211s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8ef6272e347114f48d4fe3e59f62f8fbd9d6fe65a3c2376c1e41119952c7a330] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44491 - 9280 "HINFO IN 4407530105380212382.4237632423946435234. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.081412794s
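The random-label HINFO queries in both CoreDNS logs (e.g. 7206085342544834509.5779663432164893704.) have the shape of the startup self-check sent by CoreDNS's loop-detection plugin, for which NXDOMAIN is the expected healthy answer. A minimal sketch of such a probe, assuming the github.com/miekg/dns module (which CoreDNS itself builds on) and a resolver on 127.0.0.1:53:

    package main

    import (
    	"fmt"
    	"math/rand"

    	"github.com/miekg/dns"
    )

    func main() {
    	// Two random uint64 labels, the same shape as the query names logged above.
    	name := fmt.Sprintf("%d.%d.", rand.Uint64(), rand.Uint64())

    	m := new(dns.Msg)
    	m.SetQuestion(name, dns.TypeHINFO)

    	c := new(dns.Client) // plain UDP, matching the "udp 57 false 512" entries
    	r, rtt, err := c.Exchange(m, "127.0.0.1:53")
    	if err != nil {
    		panic(err)
    	}
    	// A healthy, non-looping resolver answers NXDOMAIN, as in both logs above.
    	fmt.Println(dns.RcodeToString[r.Rcode], rtt)
    }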
	
	
	==> describe nodes <==
	Name:               functional-900196
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-900196
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=functional-900196
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T14_22_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 14:22:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-900196
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 14:34:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 14:32:34 +0000   Sat, 18 Oct 2025 14:22:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 14:32:34 +0000   Sat, 18 Oct 2025 14:22:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 14:32:34 +0000   Sat, 18 Oct 2025 14:22:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 14:32:34 +0000   Sat, 18 Oct 2025 14:22:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.34
	  Hostname:    functional-900196
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 9709b49470bb44b2a2d3964a71bb675f
	  System UUID:                9709b494-70bb-44b2-a2d3-964a71bb675f
	  Boot ID:                    07efcc6d-7a9c-407c-bc19-bf481d85f1cc
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-9f59p                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-dd4gd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-lc247                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	  kube-system                 coredns-66bc5c9577-7m2x4                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     11m
	  kube-system                 etcd-functional-900196                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         11m
	  kube-system                 kube-apiserver-functional-900196              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-900196     200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-lwq2l                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-900196              100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-kfk2q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-mbxqb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-900196 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-900196 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-900196 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeReady                11m                kubelet          Node functional-900196 status is now: NodeReady
	  Normal  RegisteredNode           11m                node-controller  Node functional-900196 event: Registered Node functional-900196 in Controller
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-900196 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-900196 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-900196 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-900196 event: Registered Node functional-900196 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-900196 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-900196 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-900196 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-900196 event: Registered Node functional-900196 in Controller
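The percentages in the "Allocated resources" block above are the summed requests and limits divided by the node's Allocatable values (2 CPUs, 4008596Ki memory). A small sketch of the CPU case, assuming the k8s.io/apimachinery resource package:

    package main

    import (
    	"fmt"

    	"k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
    	allocatable := resource.MustParse("2")  // cpu under Allocatable above
    	requests := resource.MustParse("1350m") // summed CPU requests

    	pct := requests.MilliValue() * 100 / allocatable.MilliValue()
    	fmt.Printf("cpu %s (%d%%)\n", requests.String(), pct) // cpu 1350m (67%)
    }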
	
	
	==> dmesg <==
	[Oct18 14:22] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000060] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005311] (rpcbind)[117]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.172763] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000004] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.084482] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.093384] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.140696] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.449328] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.072738] kauditd_printk_skb: 214 callbacks suppressed
	[Oct18 14:23] kauditd_printk_skb: 56 callbacks suppressed
	[  +4.565209] kauditd_printk_skb: 176 callbacks suppressed
	[ +13.742304] kauditd_printk_skb: 131 callbacks suppressed
	[  +0.110846] kauditd_printk_skb: 12 callbacks suppressed
	[  +1.037829] kauditd_printk_skb: 241 callbacks suppressed
	[Oct18 14:24] kauditd_printk_skb: 165 callbacks suppressed
	[  +4.839368] kauditd_printk_skb: 116 callbacks suppressed
	[  +1.092432] kauditd_printk_skb: 127 callbacks suppressed
	[  +0.000023] kauditd_printk_skb: 74 callbacks suppressed
	[ +25.947836] kauditd_printk_skb: 26 callbacks suppressed
	[Oct18 14:32] kauditd_printk_skb: 26 callbacks suppressed
	[  +7.150915] kauditd_printk_skb: 25 callbacks suppressed
	[Oct18 14:33] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [6c0794ff6e8e5711e73c6ed64f56ecf0f6dc92706a4d204ee111f11290cf2e44] <==
	{"level":"warn","ts":"2025-10-18T14:23:19.002339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:23:19.014970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:23:19.015271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:23:19.026883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:23:19.036277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:23:19.044009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:23:19.126022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35570","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T14:23:43.632803Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-18T14:23:43.632873Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-900196","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.34:2380"],"advertise-client-urls":["https://192.168.39.34:2379"]}
	{"level":"error","ts":"2025-10-18T14:23:43.632953Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T14:23:43.719435Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T14:23:43.719543Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T14:23:43.719582Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"6c39268f2da6496d","current-leader-member-id":"6c39268f2da6496d"}
	{"level":"info","ts":"2025-10-18T14:23:43.719736Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-18T14:23:43.719767Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-18T14:23:43.719987Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T14:23:43.720033Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T14:23:43.720041Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-18T14:23:43.720078Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.34:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T14:23:43.720085Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.34:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T14:23:43.720091Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.34:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T14:23:43.723000Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.34:2380"}
	{"level":"error","ts":"2025-10-18T14:23:43.723081Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.34:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T14:23:43.723126Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.34:2380"}
	{"level":"info","ts":"2025-10-18T14:23:43.723144Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-900196","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.34:2380"],"advertise-client-urls":["https://192.168.39.34:2379"]}
	
	
	==> etcd [aedc2a498839e385f7a9db347ff30ad23bb639573ca6c9ff50a4254948df22d0] <==
	{"level":"warn","ts":"2025-10-18T14:24:03.393411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.403553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.411920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.423842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.450107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.475855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.482214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.494263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.519912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.543096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.561736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.582286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.599796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.634238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.669265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.680518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.696148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.708272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.732117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.768874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.789233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:24:03.841018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41720","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T14:34:02.646581Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":984}
	{"level":"info","ts":"2025-10-18T14:34:02.657702Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":984,"took":"10.627386ms","hash":3562947326,"current-db-size-bytes":3338240,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":3338240,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2025-10-18T14:34:02.657829Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3562947326,"revision":984,"compact-revision":-1}
	
	
	==> kernel <==
	 14:34:26 up 12 min,  0 users,  load average: 0.11, 0.22, 0.21
	Linux functional-900196 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [53113ba9ccc6d684cb2173828ed00fedd934e317a9477387680bd35747276790] <==
	I1018 14:24:04.719486       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 14:24:04.728358       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 14:24:04.729330       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 14:24:04.729605       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 14:24:04.729863       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1018 14:24:04.734364       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 14:24:04.736132       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 14:24:04.740621       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 14:24:04.751733       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 14:24:05.530077       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 14:24:06.464533       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 14:24:06.515450       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 14:24:06.545118       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 14:24:06.554419       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 14:24:08.049198       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 14:24:08.334307       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 14:24:08.437412       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 14:24:20.264564       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.161.77"}
	I1018 14:24:24.494152       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.104.89.83"}
	I1018 14:24:26.055389       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.103.130.8"}
	I1018 14:24:26.172816       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.93.197"}
	I1018 14:32:21.327579       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 14:32:21.622411       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.45.148"}
	I1018 14:32:21.643738       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.131.141"}
	I1018 14:34:04.657900       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
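Every clusterIP the apiserver allocates in this log falls inside the Service CIDR 10.96.0.0/12 for which it created the allocator. A tiny standard-library check:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	// The allocator CIDR from the log above.
    	cidr := netip.MustParsePrefix("10.96.0.0/12")

    	// The clusterIPs allocated in this section.
    	for _, s := range []string{
    		"10.102.161.77", "10.104.89.83", "10.103.130.8",
    		"10.105.93.197", "10.110.45.148", "10.104.131.141",
    	} {
    		fmt.Println(s, cidr.Contains(netip.MustParseAddr(s))) // all true
    	}
    }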
	
	
	==> kube-controller-manager [8a1ba16847db09edd496b432d3f8beb8e87e3ad268c294da60db67bc799aad70] <==
	I1018 14:23:23.227105       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 14:23:23.227092       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 14:23:23.228466       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 14:23:23.228646       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 14:23:23.230757       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 14:23:23.230788       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 14:23:23.230794       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 14:23:23.234954       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 14:23:23.236092       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 14:23:23.237277       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 14:23:23.237366       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 14:23:23.237450       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-900196"
	I1018 14:23:23.237498       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 14:23:23.239047       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 14:23:23.252836       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 14:23:23.255972       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 14:23:23.259487       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 14:23:23.264319       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 14:23:23.271438       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 14:23:23.275470       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 14:23:23.275871       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 14:23:23.277193       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 14:23:23.279880       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 14:23:23.279907       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 14:23:23.292993       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [8dbff326b6cc8afa6d03358920f9853179471569f784102d88c64cdf4fd85912] <==
	I1018 14:24:08.082577       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 14:24:08.082769       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 14:24:08.082866       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 14:24:08.085006       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 14:24:08.086420       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 14:24:08.090627       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 14:24:08.090734       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 14:24:08.090767       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 14:24:08.091261       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 14:24:08.095383       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 14:24:08.097339       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 14:24:08.098717       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 14:24:08.098851       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 14:24:08.099335       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 14:24:08.099383       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 14:24:08.099390       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 14:24:08.099395       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 14:24:08.111890       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 14:24:08.115418       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	E1018 14:32:21.454866       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 14:32:21.460410       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 14:32:21.470427       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 14:32:21.481978       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 14:32:21.482304       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 14:32:21.500231       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [08267a0026df926525df9dfe06132bd39e9bdc06eb9ee97f4286651cddabc784] <==
	I1018 14:24:05.611269       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 14:24:05.714472       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 14:24:05.714927       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.34"]
	E1018 14:24:05.716492       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 14:24:05.812046       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1018 14:24:05.812520       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1018 14:24:05.812622       1 server_linux.go:132] "Using iptables Proxier"
	I1018 14:24:05.848718       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 14:24:05.849357       1 server.go:527] "Version info" version="v1.34.1"
	I1018 14:24:05.849554       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 14:24:05.857884       1 config.go:200] "Starting service config controller"
	I1018 14:24:05.858034       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 14:24:05.858120       1 config.go:106] "Starting endpoint slice config controller"
	I1018 14:24:05.858304       1 config.go:309] "Starting node config controller"
	I1018 14:24:05.858382       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 14:24:05.858409       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 14:24:05.859956       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 14:24:05.860073       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 14:24:05.858159       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 14:24:05.959014       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 14:24:05.961218       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 14:24:05.961404       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [5ee1e13dc39acf818c45b34aab5a553b0357925c855ed6903a3974b7e38fd710] <==
	I1018 14:23:21.305079       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 14:23:21.407136       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 14:23:21.407229       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.34"]
	E1018 14:23:21.407293       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 14:23:21.487588       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1018 14:23:21.487862       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1018 14:23:21.487981       1 server_linux.go:132] "Using iptables Proxier"
	I1018 14:23:21.506483       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 14:23:21.508268       1 server.go:527] "Version info" version="v1.34.1"
	I1018 14:23:21.508285       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 14:23:21.521297       1 config.go:200] "Starting service config controller"
	I1018 14:23:21.531578       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 14:23:21.527226       1 config.go:309] "Starting node config controller"
	I1018 14:23:21.531853       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 14:23:21.531859       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 14:23:21.530548       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 14:23:21.531866       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 14:23:21.530113       1 config.go:106] "Starting endpoint slice config controller"
	I1018 14:23:21.532475       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 14:23:21.632482       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 14:23:21.632640       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 14:23:21.632695       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
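Both kube-proxy instances log "Setting route_localnet=1", which is an IPv4 sysctl that lets NodePort traffic be accepted on loopback addresses. A tiny sketch inspecting it directly from /proc on the node (path assumed to be the all-interfaces sysctl kube-proxy touches):

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// kube-proxy writes this sysctl so NodePorts work on 127.0.0.1.
    	b, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/route_localnet")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("route_localnet=%s", b) // "1" once either proxy above has started
    }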
	
	
	==> kube-scheduler [b07ae915d18fc54175e6566e7805a5718e41da62fd8311a3fe672c42d5f4ba4d] <==
	I1018 14:24:04.076446       1 serving.go:386] Generated self-signed cert in-memory
	W1018 14:24:04.644569       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 14:24:04.644616       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 14:24:04.644625       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 14:24:04.644632       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 14:24:04.688693       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 14:24:04.688735       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 14:24:04.691257       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 14:24:04.691335       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 14:24:04.691507       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 14:24:04.691587       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 14:24:04.791968       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [e89892ddae1dc2d13c768357fc4a9f9f5f5676dbe163ddcf14af300adb499012] <==
	I1018 14:23:18.391537       1 serving.go:386] Generated self-signed cert in-memory
	W1018 14:23:19.734711       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 14:23:19.734910       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 14:23:19.735573       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 14:23:19.735742       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 14:23:19.836590       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 14:23:19.836721       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 14:23:19.841732       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 14:23:19.841789       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 14:23:19.842895       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 14:23:19.843086       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 14:23:19.942067       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 14:23:43.656331       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1018 14:23:43.656385       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 14:23:43.655643       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1018 14:23:43.662060       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1018 14:23:43.662165       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1018 14:23:43.662197       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 18 14:33:41 functional-900196 kubelet[5614]: E1018 14:33:41.742758    5614 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-9f59p" podUID="2c9afed2-19c4-4b3d-8f01-a136ceebbe4b"
	Oct 18 14:33:50 functional-900196 kubelet[5614]: E1018 14:33:50.991998    5614 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760798030990560554  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Oct 18 14:33:50 functional-900196 kubelet[5614]: E1018 14:33:50.992087    5614 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760798030990560554  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Oct 18 14:33:51 functional-900196 kubelet[5614]: E1018 14:33:51.743015    5614 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-dd4gd" podUID="2d909ba8-2bc8-448c-bf6e-e220108c425f"
	Oct 18 14:33:55 functional-900196 kubelet[5614]: E1018 14:33:55.742415    5614 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-9f59p" podUID="2c9afed2-19c4-4b3d-8f01-a136ceebbe4b"
	Oct 18 14:33:59 functional-900196 kubelet[5614]: E1018 14:33:59.207504    5614 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 18 14:33:59 functional-900196 kubelet[5614]: E1018 14:33:59.207551    5614 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 18 14:33:59 functional-900196 kubelet[5614]: E1018 14:33:59.207786    5614 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-kfk2q_kubernetes-dashboard(a429a741-948b-4a3a-b4f9-355dff740154): ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 18 14:33:59 functional-900196 kubelet[5614]: E1018 14:33:59.207828    5614 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-kfk2q" podUID="a429a741-948b-4a3a-b4f9-355dff740154"
	Oct 18 14:33:59 functional-900196 kubelet[5614]: E1018 14:33:59.992102    5614 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-kfk2q" podUID="a429a741-948b-4a3a-b4f9-355dff740154"
	Oct 18 14:34:00 functional-900196 kubelet[5614]: E1018 14:34:00.811043    5614 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod28c9616d-7ca6-4480-bb36-f61b451a4b23/crio-58706c3ba9833de77c1199c0be5d66ba9b5d1175cad1f0aa8ca286571a930d73: Error finding container 58706c3ba9833de77c1199c0be5d66ba9b5d1175cad1f0aa8ca286571a930d73: Status 404 returned error can't find the container with id 58706c3ba9833de77c1199c0be5d66ba9b5d1175cad1f0aa8ca286571a930d73
	Oct 18 14:34:00 functional-900196 kubelet[5614]: E1018 14:34:00.811735    5614 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod0adc310d24a81dac60c5ad1f35e7c92b/crio-e0e939baaa67d0fd4f6816b8d93aa969b6b5bf84197f0d2445e6e3e01e191cd3: Error finding container e0e939baaa67d0fd4f6816b8d93aa969b6b5bf84197f0d2445e6e3e01e191cd3: Status 404 returned error can't find the container with id e0e939baaa67d0fd4f6816b8d93aa969b6b5bf84197f0d2445e6e3e01e191cd3
	Oct 18 14:34:00 functional-900196 kubelet[5614]: E1018 14:34:00.811985    5614 manager.go:1116] Failed to create existing container: /kubepods/burstable/podd219f60f-61db-4f59-beb6-f1014320fded/crio-a31dcfcaadf596f99b1b00b651e185a3a4c96ef68508ad8e4b58763486df5dd3: Error finding container a31dcfcaadf596f99b1b00b651e185a3a4c96ef68508ad8e4b58763486df5dd3: Status 404 returned error can't find the container with id a31dcfcaadf596f99b1b00b651e185a3a4c96ef68508ad8e4b58763486df5dd3
	Oct 18 14:34:00 functional-900196 kubelet[5614]: E1018 14:34:00.812169    5614 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod1539d00838a4465e9c70da2faa0ecce0/crio-88e38926414dc77c4f11b9e11c309696d7379acb8fe1aa3716948b3c8f7f43ab: Error finding container 88e38926414dc77c4f11b9e11c309696d7379acb8fe1aa3716948b3c8f7f43ab: Status 404 returned error can't find the container with id 88e38926414dc77c4f11b9e11c309696d7379acb8fe1aa3716948b3c8f7f43ab
	Oct 18 14:34:00 functional-900196 kubelet[5614]: E1018 14:34:00.812469    5614 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod0adc310d24a81dac60c5ad1f35e7c92b/crio-f4ca0e130b5a974969af0faccb851fe8406129db6f0728de9417aea5c09a6d81: Error finding container f4ca0e130b5a974969af0faccb851fe8406129db6f0728de9417aea5c09a6d81: Status 404 returned error can't find the container with id f4ca0e130b5a974969af0faccb851fe8406129db6f0728de9417aea5c09a6d81
	Oct 18 14:34:00 functional-900196 kubelet[5614]: E1018 14:34:00.812717    5614 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod97ed5fabf9bf40e88932da5fec13829b/crio-50b972f1c52368f0fbc439ffc234e98462143a277880a96bd1e67be20b278229: Error finding container 50b972f1c52368f0fbc439ffc234e98462143a277880a96bd1e67be20b278229: Status 404 returned error can't find the container with id 50b972f1c52368f0fbc439ffc234e98462143a277880a96bd1e67be20b278229
	Oct 18 14:34:00 functional-900196 kubelet[5614]: E1018 14:34:00.813078    5614 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod6ab2d89b-2ccc-43cd-874a-1c4e895df2f0/crio-d6fe8246788800a71673d766b79d51cda6360d6c9b9f1f5de1c292ab7ae27b55: Error finding container d6fe8246788800a71673d766b79d51cda6360d6c9b9f1f5de1c292ab7ae27b55: Status 404 returned error can't find the container with id d6fe8246788800a71673d766b79d51cda6360d6c9b9f1f5de1c292ab7ae27b55
	Oct 18 14:34:00 functional-900196 kubelet[5614]: E1018 14:34:00.996096    5614 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760798040994229428  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Oct 18 14:34:00 functional-900196 kubelet[5614]: E1018 14:34:00.996135    5614 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760798040994229428  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Oct 18 14:34:06 functional-900196 kubelet[5614]: E1018 14:34:06.743436    5614 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-9f59p" podUID="2c9afed2-19c4-4b3d-8f01-a136ceebbe4b"
	Oct 18 14:34:10 functional-900196 kubelet[5614]: E1018 14:34:10.998641    5614 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760798050997952841  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Oct 18 14:34:10 functional-900196 kubelet[5614]: E1018 14:34:10.998724    5614 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760798050997952841  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Oct 18 14:34:17 functional-900196 kubelet[5614]: E1018 14:34:17.743253    5614 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-9f59p" podUID="2c9afed2-19c4-4b3d-8f01-a136ceebbe4b"
	Oct 18 14:34:21 functional-900196 kubelet[5614]: E1018 14:34:21.001088    5614 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760798060999962241  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Oct 18 14:34:21 functional-900196 kubelet[5614]: E1018 14:34:21.001134    5614 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760798060999962241  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
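
	Every docker.io pull in the kubelet log above fails with toomanyrequests (Docker Hub's unauthenticated pull rate limit), and kubelet then retries the pull under exponential backoff, which is what the alternating "Pulling image" / "Back-off pulling image" / ImagePullBackOff entries record. A sketch of that retry shape (not kubelet's actual code path; the pull function and the backoff constants are stand-ins):

	package main

	import (
		"errors"
		"log"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	// pullImage is a hypothetical stand-in for the CRI ImageService pull call.
	func pullImage(ref string) error {
		return errors.New("toomanyrequests: unauthenticated pull rate limit reached")
	}

	func main() {
		// Doubling delays with a cap, the same shape kubelet uses for image
		// pull backoff (the exact constants here are assumptions).
		backoff := wait.Backoff{Duration: 10 * time.Second, Factor: 2.0, Steps: 6, Cap: 5 * time.Minute}
		err := wait.ExponentialBackoff(backoff, func() (bool, error) {
			if err := pullImage("docker.io/kicbase/echo-server:latest"); err != nil {
				log.Printf("Back-off pulling image: %v", err)
				return false, nil // not done yet; retry after the next interval
			}
			return true, nil
		})
		if err != nil {
			log.Printf("giving up: %v", err) // surfaces as ImagePullBackOff in pod status
		}
	}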
	
	
	==> storage-provisioner [0e1bccb3b64c5b5b17aec547dccfe3b87145e92687850c7b5f2eeb2fbecd51b8] <==
	W1018 14:34:02.139095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:04.142602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:04.148721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:06.152752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:06.157703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:08.161454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:08.170969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:10.175586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:10.181634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:12.185758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:12.194917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:14.198528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:14.204347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:16.208394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:16.214410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:18.219035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:18.224306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:20.233893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:20.255580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:22.270941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:22.277906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:24.281835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:24.287163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:26.304487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:26.315697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
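
	The wall of warnings above is the API server telling the storage-provisioner's client to move off core v1 Endpoints. The discovery.k8s.io/v1 replacement is queried like this with client-go (a minimal sketch; the namespace comes from the logs, the service name "kube-dns" is only an example):

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		// EndpointSlices replace v1 Endpoints; they are linked to their
		// Service by the kubernetes.io/service-name label.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=kube-dns"})
		if err != nil {
			log.Fatal(err)
		}
		for _, s := range slices.Items {
			fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
		}
	}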
	
	
	==> storage-provisioner [c5b51b4a4c799496c3145843cf20f4bb06e303ff8f4c636509258d860fa6f005] <==
	I1018 14:23:21.202377       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 14:23:21.228980       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 14:23:21.229023       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 14:23:21.238321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:23:24.693920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:23:28.954099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:23:32.552491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:23:35.607127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:23:38.630085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:23:38.643928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 14:23:38.644173       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 14:23:38.644347       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-900196_34b3e5c2-cb77-42e0-8936-58509692af6c!
	I1018 14:23:38.644293       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3e018e70-a737-4b8a-9686-e3ed69bbe860", APIVersion:"v1", ResourceVersion:"500", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-900196_34b3e5c2-cb77-42e0-8936-58509692af6c became leader
	W1018 14:23:38.652306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:23:38.659508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 14:23:38.745098       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-900196_34b3e5c2-cb77-42e0-8936-58509692af6c!
	W1018 14:23:40.662333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:23:40.670966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:23:42.675961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:23:42.683103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
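
	The leaderelection.go:243/253 lines show the provisioner acquiring the kube-system/k8s.io-minikube-hostpath lease before starting its controller. A minimal sketch of the same client-go primitive (lock name and namespace taken from the log; identity and timings assumed; note the provisioner itself still locks on a v1 Endpoints object, which is exactly what triggers the deprecation warnings above, whereas this sketch uses the newer Lease lock):

	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname()

		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
			Client:     cs.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("acquired lease; starting provisioner controller")
				},
				OnStoppedLeading: func() {
					log.Println("lost lease; shutting down")
				},
			},
		})
	}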
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-900196 -n functional-900196
helpers_test.go:269: (dbg) Run:  kubectl --context functional-900196 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-9f59p hello-node-connect-7d85dfc575-dd4gd mysql-5bb876957f-lc247 sp-pod dashboard-metrics-scraper-77bf4d6c4c-kfk2q kubernetes-dashboard-855c9754f9-mbxqb
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-900196 describe pod busybox-mount hello-node-75c85bcc94-9f59p hello-node-connect-7d85dfc575-dd4gd mysql-5bb876957f-lc247 sp-pod dashboard-metrics-scraper-77bf4d6c4c-kfk2q kubernetes-dashboard-855c9754f9-mbxqb
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-900196 describe pod busybox-mount hello-node-75c85bcc94-9f59p hello-node-connect-7d85dfc575-dd4gd mysql-5bb876957f-lc247 sp-pod dashboard-metrics-scraper-77bf4d6c4c-kfk2q kubernetes-dashboard-855c9754f9-mbxqb: exit status 1 (161.284266ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-900196/192.168.39.34
	Start Time:       Sat, 18 Oct 2025 14:30:38 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  cri-o://3a6150c71b2aba5471783b062a7b940e5d8823a4ffdc974bc8cbcafef4b47a8c
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test; date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 18 Oct 2025 14:32:12 +0000
	      Finished:     Sat, 18 Oct 2025 14:32:13 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hrltn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-hrltn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  3m49s  default-scheduler  Successfully assigned default/busybox-mount to functional-900196
	  Normal  Pulling    3m49s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     2m15s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.267s (1m34.172s including waiting). Image size: 4631262 bytes.
	  Normal  Created    2m15s  kubelet            Created container: mount-munger
	  Normal  Started    2m15s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-9f59p
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-900196/192.168.39.34
	Start Time:       Sat, 18 Oct 2025 14:24:26 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j5bzj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-j5bzj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-9f59p to functional-900196
	  Warning  Failed     5m52s                 kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m29s (x4 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     73s (x3 over 8m28s)   kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     73s (x4 over 8m28s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    10s (x10 over 8m27s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     10s (x10 over 8m27s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-dd4gd
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-900196/192.168.39.34
	Start Time:       Sat, 18 Oct 2025 14:24:25 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hp9w4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hp9w4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-dd4gd to functional-900196
	  Warning  Failed     3m50s                 kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     104s (x3 over 8m58s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     104s (x4 over 8m58s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    36s (x9 over 8m58s)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     36s (x9 over 8m58s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    24s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-lc247
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-900196/192.168.39.34
	Start Time:       Sat, 18 Oct 2025 14:24:24 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-796d9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-796d9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-5bb876957f-lc247 to functional-900196
	  Warning  Failed     9m29s                  kubelet            Failed to pull image "docker.io/mysql:5.7": copying system image from manifest list: determining manifest MIME type for docker://mysql:5.7: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m51s (x2 over 6m56s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m16s (x4 over 9m29s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m16s                  kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    59s (x10 over 9m28s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     59s (x10 over 9m28s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    45s (x5 over 10m)      kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-900196/192.168.39.34
	Start Time:       Sat, 18 Oct 2025 14:24:32 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hr5x7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-hr5x7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m54s                  default-scheduler  Successfully assigned default/sp-pod to functional-900196
	  Warning  Failed     7m27s                  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m49s (x3 over 7m27s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m49s (x2 over 5m22s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2m11s (x5 over 7m26s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     2m11s (x5 over 7m26s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    116s (x4 over 9m54s)   kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-kfk2q" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-mbxqb" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-900196 describe pod busybox-mount hello-node-75c85bcc94-9f59p hello-node-connect-7d85dfc575-dd4gd mysql-5bb876957f-lc247 sp-pod dashboard-metrics-scraper-77bf4d6c4c-kfk2q kubernetes-dashboard-855c9754f9-mbxqb: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (603.46s)
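
Every non-running pod in the post-mortem above (mysql, sp-pod, the hello-node pods, the dashboard pods) is stuck on the same root cause: Docker Hub's unauthenticated pull rate limit. One standard mitigation is to pull with credentials by wiring a dockerconfigjson secret into the namespace's default service account. A hedged sketch with client-go (the secret name and the placeholder auth string are ours, not part of the test):

	package main

	import (
		"context"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ctx := context.Background()

		// A dockerconfigjson secret holding Docker Hub credentials (the auth
		// value is a placeholder: base64 of "user:access-token").
		secret := &corev1.Secret{
			ObjectMeta: metav1.ObjectMeta{Name: "dockerhub-creds", Namespace: "default"},
			Type:       corev1.SecretTypeDockerConfigJson,
			Data: map[string][]byte{
				corev1.DockerConfigJsonKey: []byte(`{"auths":{"https://index.docker.io/v1/":{"auth":"<base64 user:token>"}}}`),
			},
		}
		if _, err := cs.CoreV1().Secrets("default").Create(ctx, secret, metav1.CreateOptions{}); err != nil {
			log.Fatal(err)
		}

		// Attach the secret to the default service account so every pod in
		// the namespace pulls with credentials instead of anonymously.
		sa, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		sa.ImagePullSecrets = append(sa.ImagePullSecrets, corev1.LocalObjectReference{Name: "dockerhub-creds"})
		if _, err := cs.CoreV1().ServiceAccounts("default").Update(ctx, sa, metav1.UpdateOptions{}); err != nil {
			log.Fatal(err)
		}
	}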

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-900196 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-900196 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-9f59p" [2c9afed2-19c4-4b3d-8f01-a136ceebbe4b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-900196 -n functional-900196
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-18 14:34:26.545038714 +0000 UTC m=+1560.280395755
functional_test.go:1460: (dbg) Run:  kubectl --context functional-900196 describe po hello-node-75c85bcc94-9f59p -n default
functional_test.go:1460: (dbg) kubectl --context functional-900196 describe po hello-node-75c85bcc94-9f59p -n default:
Name:             hello-node-75c85bcc94-9f59p
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-900196/192.168.39.34
Start Time:       Sat, 18 Oct 2025 14:24:26 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j5bzj (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-j5bzj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-9f59p to functional-900196
  Warning  Failed     5m51s                kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    2m28s (x4 over 10m)  kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     72s (x3 over 8m27s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     72s (x4 over 8m27s)  kubelet            Error: ErrImagePull
  Normal   BackOff    9s (x10 over 8m26s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     9s (x10 over 8m26s)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-900196 logs hello-node-75c85bcc94-9f59p -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-900196 logs hello-node-75c85bcc94-9f59p -n default: exit status 1 (87.316783ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-9f59p" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-900196 logs hello-node-75c85bcc94-9f59p -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.70s)
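
The 10m wait at functional_test.go:1460 is a poll loop: list pods matching app=hello-node and succeed once one reports the Ready condition. A minimal sketch of that polling pattern with client-go (namespace, label selector, and timeout come from the test output above; the helper name and 5s poll interval are ours):

	package main

	import (
		"context"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForReadyPod polls until some pod matching selector reports Ready.
	func waitForReadyPod(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 5*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					return false, nil // transient API error; keep polling
				}
				for _, p := range pods.Items {
					for _, c := range p.Status.Conditions {
						if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
							return true, nil
						}
					}
				}
				return false, nil // e.g. still ImagePullBackOff, as above
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForReadyPod(context.Background(), cs, "default", "app=hello-node", 10*time.Minute); err != nil {
			log.Fatalf("pod never became Ready: %v", err) // the test fails here
		}
	}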

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-900196 service --namespace=default --https --url hello-node: exit status 115 (361.158094ms)

                                                
                                                
-- stdout --
	https://192.168.39.34:31908
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-900196 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-900196 service hello-node --url --format={{.IP}}: exit status 115 (377.525478ms)

                                                
                                                
-- stdout --
	192.168.39.34
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-900196 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-900196 service hello-node --url: exit status 115 (421.464901ms)

                                                
                                                
-- stdout --
	http://192.168.39.34:31908
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-900196 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.34:31908
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.42s)

                                                
                                    
TestPreload (164.1s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-490392 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-490392 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0: (1m43.621453282s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-490392 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-490392 image pull gcr.io/k8s-minikube/busybox: (1.401096119s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-490392
E1018 15:19:24.563551 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-490392: (7.012555444s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-490392 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-490392 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (49.002173882s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-490392 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
panic.go:636: *** TestPreload FAILED at 2025-10-18 15:20:19.955609316 +0000 UTC m=+4313.690966341
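
The failure at preload_test.go:75 is the post-restart check: gcr.io/k8s-minikube/busybox was pulled before `minikube stop`, so it must still appear in `image list` after the second start; the stdout above shows it missing. Roughly, the assertion looks like this (a sketch; the binary path and image name come from the log, the helper's shape is assumed and may differ from preload_test.go):

	package preload_test

	import (
		"os/exec"
		"strings"
		"testing"
	)

	func assertBusyboxPreserved(t *testing.T, profile string) {
		t.Helper()
		out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "image", "list").CombinedOutput()
		if err != nil {
			t.Fatalf("image list failed: %v\n%s", err, out)
		}
		// The image pulled before `minikube stop` must survive the restart.
		if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
			t.Fatalf("expected gcr.io/k8s-minikube/busybox in image list output, got:\n%s", out)
		}
	}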
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-490392 -n test-preload-490392
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-490392 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-490392 logs -n 25: (1.151342784s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                        ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-019263 ssh -n multinode-019263-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-019263     │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ ssh     │ multinode-019263 ssh -n multinode-019263 sudo cat /home/docker/cp-test_multinode-019263-m03_multinode-019263.txt                                                                    │ multinode-019263     │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ cp      │ multinode-019263 cp multinode-019263-m03:/home/docker/cp-test.txt multinode-019263-m02:/home/docker/cp-test_multinode-019263-m03_multinode-019263-m02.txt                           │ multinode-019263     │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ ssh     │ multinode-019263 ssh -n multinode-019263-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-019263     │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ ssh     │ multinode-019263 ssh -n multinode-019263-m02 sudo cat /home/docker/cp-test_multinode-019263-m03_multinode-019263-m02.txt                                                            │ multinode-019263     │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ node    │ multinode-019263 node stop m03                                                                                                                                                      │ multinode-019263     │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ node    │ multinode-019263 node start m03 -v=5 --alsologtostderr                                                                                                                              │ multinode-019263     │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ node    │ list -p multinode-019263                                                                                                                                                            │ multinode-019263     │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ stop    │ -p multinode-019263                                                                                                                                                                 │ multinode-019263     │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:09 UTC │
	│ start   │ -p multinode-019263 --wait=true -v=5 --alsologtostderr                                                                                                                              │ multinode-019263     │ jenkins │ v1.37.0 │ 18 Oct 25 15:09 UTC │ 18 Oct 25 15:11 UTC │
	│ node    │ list -p multinode-019263                                                                                                                                                            │ multinode-019263     │ jenkins │ v1.37.0 │ 18 Oct 25 15:11 UTC │                     │
	│ node    │ multinode-019263 node delete m03                                                                                                                                                    │ multinode-019263     │ jenkins │ v1.37.0 │ 18 Oct 25 15:11 UTC │ 18 Oct 25 15:12 UTC │
	│ stop    │ multinode-019263 stop                                                                                                                                                               │ multinode-019263     │ jenkins │ v1.37.0 │ 18 Oct 25 15:12 UTC │ 18 Oct 25 15:14 UTC │
	│ start   │ -p multinode-019263 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                          │ multinode-019263     │ jenkins │ v1.37.0 │ 18 Oct 25 15:14 UTC │ 18 Oct 25 15:16 UTC │
	│ node    │ list -p multinode-019263                                                                                                                                                            │ multinode-019263     │ jenkins │ v1.37.0 │ 18 Oct 25 15:16 UTC │                     │
	│ start   │ -p multinode-019263-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-019263-m02 │ jenkins │ v1.37.0 │ 18 Oct 25 15:16 UTC │                     │
	│ start   │ -p multinode-019263-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-019263-m03 │ jenkins │ v1.37.0 │ 18 Oct 25 15:16 UTC │ 18 Oct 25 15:17 UTC │
	│ node    │ add -p multinode-019263                                                                                                                                                             │ multinode-019263     │ jenkins │ v1.37.0 │ 18 Oct 25 15:17 UTC │                     │
	│ delete  │ -p multinode-019263-m03                                                                                                                                                             │ multinode-019263-m03 │ jenkins │ v1.37.0 │ 18 Oct 25 15:17 UTC │ 18 Oct 25 15:17 UTC │
	│ delete  │ -p multinode-019263                                                                                                                                                                 │ multinode-019263     │ jenkins │ v1.37.0 │ 18 Oct 25 15:17 UTC │ 18 Oct 25 15:17 UTC │
	│ start   │ -p test-preload-490392 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0 │ test-preload-490392  │ jenkins │ v1.37.0 │ 18 Oct 25 15:17 UTC │ 18 Oct 25 15:19 UTC │
	│ image   │ test-preload-490392 image pull gcr.io/k8s-minikube/busybox                                                                                                                          │ test-preload-490392  │ jenkins │ v1.37.0 │ 18 Oct 25 15:19 UTC │ 18 Oct 25 15:19 UTC │
	│ stop    │ -p test-preload-490392                                                                                                                                                              │ test-preload-490392  │ jenkins │ v1.37.0 │ 18 Oct 25 15:19 UTC │ 18 Oct 25 15:19 UTC │
	│ start   │ -p test-preload-490392 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                         │ test-preload-490392  │ jenkins │ v1.37.0 │ 18 Oct 25 15:19 UTC │ 18 Oct 25 15:20 UTC │
	│ image   │ test-preload-490392 image list                                                                                                                                                      │ test-preload-490392  │ jenkins │ v1.37.0 │ 18 Oct 25 15:20 UTC │ 18 Oct 25 15:20 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
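Editor's note: the audit trail above captures the full TestPreload sequence: create the cluster with --preload=false on Kubernetes v1.32.0, pull an extra image, stop the cluster, restart it with the preload enabled, and list images to confirm the pulled image survived the restart. A minimal Go sketch of that command sequence follows; the `run` helper and the hard-coded flag set are illustrative, not the test harness's actual code.

```go
// Sketch of the TestPreload command sequence recorded in the audit table.
// Assumes a minikube binary on PATH; "run" is an illustrative helper.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) (string, error) {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	return string(out), err
}

func main() {
	profile := "test-preload-490392"
	steps := [][]string{
		{"start", "-p", profile, "--memory=3072", "--preload=false",
			"--driver=kvm2", "--container-runtime=crio", "--kubernetes-version=v1.32.0"},
		{"-p", profile, "image", "pull", "gcr.io/k8s-minikube/busybox"},
		{"stop", "-p", profile},
		{"start", "-p", profile, "--memory=3072", "--driver=kvm2", "--container-runtime=crio"},
		{"-p", profile, "image", "list"},
	}
	for _, step := range steps {
		if out, err := run(step...); err != nil {
			fmt.Printf("step %v failed: %v\n%s", step, err, out)
			return
		}
	}
	fmt.Println("preload sequence completed")
}
```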
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 15:19:30
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 15:19:30.778934 1795748 out.go:360] Setting OutFile to fd 1 ...
	I1018 15:19:30.779218 1795748 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:19:30.779231 1795748 out.go:374] Setting ErrFile to fd 2...
	I1018 15:19:30.779237 1795748 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:19:30.779473 1795748 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1755824/.minikube/bin
	I1018 15:19:30.780004 1795748 out.go:368] Setting JSON to false
	I1018 15:19:30.780980 1795748 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":25319,"bootTime":1760775452,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 15:19:30.781084 1795748 start.go:141] virtualization: kvm guest
	I1018 15:19:30.783131 1795748 out.go:179] * [test-preload-490392] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 15:19:30.784839 1795748 notify.go:220] Checking for updates...
	I1018 15:19:30.784874 1795748 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 15:19:30.786414 1795748 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 15:19:30.788134 1795748 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1755824/kubeconfig
	I1018 15:19:30.789445 1795748 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1755824/.minikube
	I1018 15:19:30.791045 1795748 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 15:19:30.792322 1795748 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 15:19:30.793998 1795748 config.go:182] Loaded profile config "test-preload-490392": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1018 15:19:30.794411 1795748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 15:19:30.794457 1795748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 15:19:30.808288 1795748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35859
	I1018 15:19:30.808836 1795748 main.go:141] libmachine: () Calling .GetVersion
	I1018 15:19:30.809326 1795748 main.go:141] libmachine: Using API Version  1
	I1018 15:19:30.809358 1795748 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 15:19:30.809777 1795748 main.go:141] libmachine: () Calling .GetMachineName
	I1018 15:19:30.809957 1795748 main.go:141] libmachine: (test-preload-490392) Calling .DriverName
	I1018 15:19:30.812035 1795748 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1018 15:19:30.813485 1795748 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 15:19:30.813813 1795748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 15:19:30.813891 1795748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 15:19:30.828132 1795748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35761
	I1018 15:19:30.828701 1795748 main.go:141] libmachine: () Calling .GetVersion
	I1018 15:19:30.829249 1795748 main.go:141] libmachine: Using API Version  1
	I1018 15:19:30.829272 1795748 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 15:19:30.829621 1795748 main.go:141] libmachine: () Calling .GetMachineName
	I1018 15:19:30.829839 1795748 main.go:141] libmachine: (test-preload-490392) Calling .DriverName
	I1018 15:19:30.865400 1795748 out.go:179] * Using the kvm2 driver based on existing profile
	I1018 15:19:30.866605 1795748 start.go:305] selected driver: kvm2
	I1018 15:19:30.866624 1795748 start.go:925] validating driver "kvm2" against &{Name:test-preload-490392 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-490392 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:19:30.866729 1795748 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 15:19:30.867497 1795748 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:19:30.867599 1795748 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-1755824/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 15:19:30.882184 1795748 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 15:19:30.882223 1795748 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-1755824/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 15:19:30.896383 1795748 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 15:19:30.896826 1795748 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 15:19:30.896859 1795748 cni.go:84] Creating CNI manager for ""
	I1018 15:19:30.896914 1795748 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 15:19:30.896966 1795748 start.go:349] cluster config:
	{Name:test-preload-490392 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-490392 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:19:30.897060 1795748 iso.go:125] acquiring lock: {Name:mk7faf1d3c636cdbb2becc20102b665984151b51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:19:30.899583 1795748 out.go:179] * Starting "test-preload-490392" primary control-plane node in "test-preload-490392" cluster
	I1018 15:19:30.900804 1795748 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1018 15:19:30.929772 1795748 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1018 15:19:30.929812 1795748 cache.go:58] Caching tarball of preloaded images
	I1018 15:19:30.929995 1795748 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1018 15:19:30.931732 1795748 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1018 15:19:30.933029 1795748 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1018 15:19:30.977976 1795748 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1018 15:19:30.978042 1795748 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1018 15:19:33.332876 1795748 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
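Editor's note: the download above appends `?checksum=md5:<sum>` to the preload URL, so the tarball is verified against the checksum fetched from the GCS API. Below is a minimal stdlib-only sketch of that pattern, hashing the stream in the same pass as the write; the destination path is a placeholder and this is not minikube's actual download code.

```go
// Minimal sketch of a checksum-verified download, mirroring the
// "?checksum=md5:..." scheme in the log above.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// download fetches url into dest and verifies its MD5 against wantMD5.
func download(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	// Write to the file and the hash in one pass over the stream.
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// URL and checksum copied from the log lines above; dest is a placeholder.
	err := download(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4",
		"/tmp/preload.tar.lz4",
		"2acdb4dde52794f2167c79dcee7507ae",
	)
	if err != nil {
		fmt.Println(err)
	}
}
```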
	I1018 15:19:33.333050 1795748 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/test-preload-490392/config.json ...
	I1018 15:19:33.333318 1795748 start.go:360] acquireMachinesLock for test-preload-490392: {Name:mkd96faf82baee5d117338197f9c6cbf4f45de94 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1018 15:19:33.333449 1795748 start.go:364] duration metric: took 102.515µs to acquireMachinesLock for "test-preload-490392"
	I1018 15:19:33.333475 1795748 start.go:96] Skipping create...Using existing machine configuration
	I1018 15:19:33.333482 1795748 fix.go:54] fixHost starting: 
	I1018 15:19:33.333783 1795748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 15:19:33.333828 1795748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 15:19:33.347723 1795748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37547
	I1018 15:19:33.348268 1795748 main.go:141] libmachine: () Calling .GetVersion
	I1018 15:19:33.348765 1795748 main.go:141] libmachine: Using API Version  1
	I1018 15:19:33.348789 1795748 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 15:19:33.349178 1795748 main.go:141] libmachine: () Calling .GetMachineName
	I1018 15:19:33.349422 1795748 main.go:141] libmachine: (test-preload-490392) Calling .DriverName
	I1018 15:19:33.349633 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetState
	I1018 15:19:33.351553 1795748 fix.go:112] recreateIfNeeded on test-preload-490392: state=Stopped err=<nil>
	I1018 15:19:33.351590 1795748 main.go:141] libmachine: (test-preload-490392) Calling .DriverName
	W1018 15:19:33.351776 1795748 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 15:19:33.353817 1795748 out.go:252] * Restarting existing kvm2 VM for "test-preload-490392" ...
	I1018 15:19:33.353848 1795748 main.go:141] libmachine: (test-preload-490392) Calling .Start
	I1018 15:19:33.354015 1795748 main.go:141] libmachine: (test-preload-490392) starting domain...
	I1018 15:19:33.354033 1795748 main.go:141] libmachine: (test-preload-490392) ensuring networks are active...
	I1018 15:19:33.354899 1795748 main.go:141] libmachine: (test-preload-490392) Ensuring network default is active
	I1018 15:19:33.355316 1795748 main.go:141] libmachine: (test-preload-490392) Ensuring network mk-test-preload-490392 is active
	I1018 15:19:33.355772 1795748 main.go:141] libmachine: (test-preload-490392) getting domain XML...
	I1018 15:19:33.356917 1795748 main.go:141] libmachine: (test-preload-490392) DBG | starting domain XML:
	I1018 15:19:33.356938 1795748 main.go:141] libmachine: (test-preload-490392) DBG | <domain type='kvm'>
	I1018 15:19:33.356949 1795748 main.go:141] libmachine: (test-preload-490392) DBG |   <name>test-preload-490392</name>
	I1018 15:19:33.356962 1795748 main.go:141] libmachine: (test-preload-490392) DBG |   <uuid>eec9d1e8-645e-4634-bc61-41d52382d0ec</uuid>
	I1018 15:19:33.356973 1795748 main.go:141] libmachine: (test-preload-490392) DBG |   <memory unit='KiB'>3145728</memory>
	I1018 15:19:33.356984 1795748 main.go:141] libmachine: (test-preload-490392) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1018 15:19:33.356990 1795748 main.go:141] libmachine: (test-preload-490392) DBG |   <vcpu placement='static'>2</vcpu>
	I1018 15:19:33.356997 1795748 main.go:141] libmachine: (test-preload-490392) DBG |   <os>
	I1018 15:19:33.357004 1795748 main.go:141] libmachine: (test-preload-490392) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1018 15:19:33.357009 1795748 main.go:141] libmachine: (test-preload-490392) DBG |     <boot dev='cdrom'/>
	I1018 15:19:33.357014 1795748 main.go:141] libmachine: (test-preload-490392) DBG |     <boot dev='hd'/>
	I1018 15:19:33.357024 1795748 main.go:141] libmachine: (test-preload-490392) DBG |     <bootmenu enable='no'/>
	I1018 15:19:33.357033 1795748 main.go:141] libmachine: (test-preload-490392) DBG |   </os>
	I1018 15:19:33.357045 1795748 main.go:141] libmachine: (test-preload-490392) DBG |   <features>
	I1018 15:19:33.357054 1795748 main.go:141] libmachine: (test-preload-490392) DBG |     <acpi/>
	I1018 15:19:33.357064 1795748 main.go:141] libmachine: (test-preload-490392) DBG |     <apic/>
	I1018 15:19:33.357072 1795748 main.go:141] libmachine: (test-preload-490392) DBG |     <pae/>
	I1018 15:19:33.357078 1795748 main.go:141] libmachine: (test-preload-490392) DBG |   </features>
	I1018 15:19:33.357116 1795748 main.go:141] libmachine: (test-preload-490392) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1018 15:19:33.357138 1795748 main.go:141] libmachine: (test-preload-490392) DBG |   <clock offset='utc'/>
	I1018 15:19:33.357150 1795748 main.go:141] libmachine: (test-preload-490392) DBG |   <on_poweroff>destroy</on_poweroff>
	I1018 15:19:33.357163 1795748 main.go:141] libmachine: (test-preload-490392) DBG |   <on_reboot>restart</on_reboot>
	I1018 15:19:33.357172 1795748 main.go:141] libmachine: (test-preload-490392) DBG |   <on_crash>destroy</on_crash>
	I1018 15:19:33.357182 1795748 main.go:141] libmachine: (test-preload-490392) DBG |   <devices>
	I1018 15:19:33.357192 1795748 main.go:141] libmachine: (test-preload-490392) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1018 15:19:33.357216 1795748 main.go:141] libmachine: (test-preload-490392) DBG |     <disk type='file' device='cdrom'>
	I1018 15:19:33.357239 1795748 main.go:141] libmachine: (test-preload-490392) DBG |       <driver name='qemu' type='raw'/>
	I1018 15:19:33.357258 1795748 main.go:141] libmachine: (test-preload-490392) DBG |       <source file='/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/test-preload-490392/boot2docker.iso'/>
	I1018 15:19:33.357269 1795748 main.go:141] libmachine: (test-preload-490392) DBG |       <target dev='hdc' bus='scsi'/>
	I1018 15:19:33.357276 1795748 main.go:141] libmachine: (test-preload-490392) DBG |       <readonly/>
	I1018 15:19:33.357285 1795748 main.go:141] libmachine: (test-preload-490392) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1018 15:19:33.357290 1795748 main.go:141] libmachine: (test-preload-490392) DBG |     </disk>
	I1018 15:19:33.357298 1795748 main.go:141] libmachine: (test-preload-490392) DBG |     <disk type='file' device='disk'>
	I1018 15:19:33.357304 1795748 main.go:141] libmachine: (test-preload-490392) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1018 15:19:33.357314 1795748 main.go:141] libmachine: (test-preload-490392) DBG |       <source file='/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/test-preload-490392/test-preload-490392.rawdisk'/>
	I1018 15:19:33.357339 1795748 main.go:141] libmachine: (test-preload-490392) DBG |       <target dev='hda' bus='virtio'/>
	I1018 15:19:33.357381 1795748 main.go:141] libmachine: (test-preload-490392) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1018 15:19:33.357396 1795748 main.go:141] libmachine: (test-preload-490392) DBG |     </disk>
	I1018 15:19:33.357408 1795748 main.go:141] libmachine: (test-preload-490392) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1018 15:19:33.357424 1795748 main.go:141] libmachine: (test-preload-490392) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1018 15:19:33.357434 1795748 main.go:141] libmachine: (test-preload-490392) DBG |     </controller>
	I1018 15:19:33.357443 1795748 main.go:141] libmachine: (test-preload-490392) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1018 15:19:33.357451 1795748 main.go:141] libmachine: (test-preload-490392) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1018 15:19:33.357463 1795748 main.go:141] libmachine: (test-preload-490392) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1018 15:19:33.357474 1795748 main.go:141] libmachine: (test-preload-490392) DBG |     </controller>
	I1018 15:19:33.357484 1795748 main.go:141] libmachine: (test-preload-490392) DBG |     <interface type='network'>
	I1018 15:19:33.357514 1795748 main.go:141] libmachine: (test-preload-490392) DBG |       <mac address='52:54:00:0b:af:24'/>
	I1018 15:19:33.357527 1795748 main.go:141] libmachine: (test-preload-490392) DBG |       <source network='mk-test-preload-490392'/>
	I1018 15:19:33.357545 1795748 main.go:141] libmachine: (test-preload-490392) DBG |       <model type='virtio'/>
	I1018 15:19:33.357556 1795748 main.go:141] libmachine: (test-preload-490392) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1018 15:19:33.357574 1795748 main.go:141] libmachine: (test-preload-490392) DBG |     </interface>
	I1018 15:19:33.357587 1795748 main.go:141] libmachine: (test-preload-490392) DBG |     <interface type='network'>
	I1018 15:19:33.357598 1795748 main.go:141] libmachine: (test-preload-490392) DBG |       <mac address='52:54:00:dd:ba:38'/>
	I1018 15:19:33.357607 1795748 main.go:141] libmachine: (test-preload-490392) DBG |       <source network='default'/>
	I1018 15:19:33.357618 1795748 main.go:141] libmachine: (test-preload-490392) DBG |       <model type='virtio'/>
	I1018 15:19:33.357634 1795748 main.go:141] libmachine: (test-preload-490392) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1018 15:19:33.357647 1795748 main.go:141] libmachine: (test-preload-490392) DBG |     </interface>
	I1018 15:19:33.357659 1795748 main.go:141] libmachine: (test-preload-490392) DBG |     <serial type='pty'>
	I1018 15:19:33.357671 1795748 main.go:141] libmachine: (test-preload-490392) DBG |       <target type='isa-serial' port='0'>
	I1018 15:19:33.357680 1795748 main.go:141] libmachine: (test-preload-490392) DBG |         <model name='isa-serial'/>
	I1018 15:19:33.357690 1795748 main.go:141] libmachine: (test-preload-490392) DBG |       </target>
	I1018 15:19:33.357699 1795748 main.go:141] libmachine: (test-preload-490392) DBG |     </serial>
	I1018 15:19:33.357709 1795748 main.go:141] libmachine: (test-preload-490392) DBG |     <console type='pty'>
	I1018 15:19:33.357717 1795748 main.go:141] libmachine: (test-preload-490392) DBG |       <target type='serial' port='0'/>
	I1018 15:19:33.357726 1795748 main.go:141] libmachine: (test-preload-490392) DBG |     </console>
	I1018 15:19:33.357733 1795748 main.go:141] libmachine: (test-preload-490392) DBG |     <input type='mouse' bus='ps2'/>
	I1018 15:19:33.357745 1795748 main.go:141] libmachine: (test-preload-490392) DBG |     <input type='keyboard' bus='ps2'/>
	I1018 15:19:33.357756 1795748 main.go:141] libmachine: (test-preload-490392) DBG |     <audio id='1' type='none'/>
	I1018 15:19:33.357766 1795748 main.go:141] libmachine: (test-preload-490392) DBG |     <memballoon model='virtio'>
	I1018 15:19:33.357778 1795748 main.go:141] libmachine: (test-preload-490392) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1018 15:19:33.357789 1795748 main.go:141] libmachine: (test-preload-490392) DBG |     </memballoon>
	I1018 15:19:33.357806 1795748 main.go:141] libmachine: (test-preload-490392) DBG |     <rng model='virtio'>
	I1018 15:19:33.357826 1795748 main.go:141] libmachine: (test-preload-490392) DBG |       <backend model='random'>/dev/random</backend>
	I1018 15:19:33.357852 1795748 main.go:141] libmachine: (test-preload-490392) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1018 15:19:33.357862 1795748 main.go:141] libmachine: (test-preload-490392) DBG |     </rng>
	I1018 15:19:33.357871 1795748 main.go:141] libmachine: (test-preload-490392) DBG |   </devices>
	I1018 15:19:33.357880 1795748 main.go:141] libmachine: (test-preload-490392) DBG | </domain>
	I1018 15:19:33.357906 1795748 main.go:141] libmachine: (test-preload-490392) DBG | 
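Editor's note: the DBG lines above dump the libvirt domain XML verbatim before the VM is restarted. If you need to pull fields such as the domain name, memory, or NIC MAC addresses out of such a dump, a small encoding/xml sketch like the following works; the struct models only the fields shown here, not the full libvirt schema.

```go
// Sketch: extract name, memory, and NIC details from a libvirt domain XML
// like the one dumped above, using only the standard library.
package main

import (
	"encoding/xml"
	"fmt"
)

// domain covers just the fields of interest, not the full libvirt schema.
type domain struct {
	Name   string `xml:"name"`
	Memory struct {
		Unit  string `xml:"unit,attr"`
		Value string `xml:",chardata"`
	} `xml:"memory"`
	Interfaces []struct {
		MAC struct {
			Address string `xml:"address,attr"`
		} `xml:"mac"`
		Source struct {
			Network string `xml:"network,attr"`
		} `xml:"source"`
	} `xml:"devices>interface"`
}

func main() {
	// Abbreviated version of the XML logged above.
	raw := `<domain type='kvm'>
  <name>test-preload-490392</name>
  <memory unit='KiB'>3145728</memory>
  <devices>
    <interface type='network'>
      <mac address='52:54:00:0b:af:24'/>
      <source network='mk-test-preload-490392'/>
    </interface>
  </devices>
</domain>`

	var d domain
	if err := xml.Unmarshal([]byte(raw), &d); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s %s\n", d.Name, d.Memory.Value, d.Memory.Unit)
	for _, nic := range d.Interfaces {
		fmt.Printf("  nic %s on network %s\n", nic.MAC.Address, nic.Source.Network)
	}
}
```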
	I1018 15:19:34.720890 1795748 main.go:141] libmachine: (test-preload-490392) waiting for domain to start...
	I1018 15:19:34.722164 1795748 main.go:141] libmachine: (test-preload-490392) domain is now running
	I1018 15:19:34.722193 1795748 main.go:141] libmachine: (test-preload-490392) waiting for IP...
	I1018 15:19:34.723243 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:34.723825 1795748 main.go:141] libmachine: (test-preload-490392) found domain IP: 192.168.39.200
	I1018 15:19:34.723851 1795748 main.go:141] libmachine: (test-preload-490392) reserving static IP address...
	I1018 15:19:34.723865 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has current primary IP address 192.168.39.200 and MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:34.724288 1795748 main.go:141] libmachine: (test-preload-490392) DBG | found host DHCP lease matching {name: "test-preload-490392", mac: "52:54:00:0b:af:24", ip: "192.168.39.200"} in network mk-test-preload-490392: {Iface:virbr1 ExpiryTime:2025-10-18 16:17:55 +0000 UTC Type:0 Mac:52:54:00:0b:af:24 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:test-preload-490392 Clientid:01:52:54:00:0b:af:24}
	I1018 15:19:34.724310 1795748 main.go:141] libmachine: (test-preload-490392) reserved static IP address 192.168.39.200 for domain test-preload-490392
	I1018 15:19:34.724328 1795748 main.go:141] libmachine: (test-preload-490392) DBG | skip adding static IP to network mk-test-preload-490392 - found existing host DHCP lease matching {name: "test-preload-490392", mac: "52:54:00:0b:af:24", ip: "192.168.39.200"}
	I1018 15:19:34.724358 1795748 main.go:141] libmachine: (test-preload-490392) DBG | Getting to WaitForSSH function...
	I1018 15:19:34.724395 1795748 main.go:141] libmachine: (test-preload-490392) waiting for SSH...
	I1018 15:19:34.726639 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:34.726981 1795748 main.go:141] libmachine: (test-preload-490392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:af:24", ip: ""} in network mk-test-preload-490392: {Iface:virbr1 ExpiryTime:2025-10-18 16:17:55 +0000 UTC Type:0 Mac:52:54:00:0b:af:24 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:test-preload-490392 Clientid:01:52:54:00:0b:af:24}
	I1018 15:19:34.727011 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined IP address 192.168.39.200 and MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:34.727138 1795748 main.go:141] libmachine: (test-preload-490392) DBG | Using SSH client type: external
	I1018 15:19:34.727168 1795748 main.go:141] libmachine: (test-preload-490392) DBG | Using SSH private key: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/test-preload-490392/id_rsa (-rw-------)
	I1018 15:19:34.727220 1795748 main.go:141] libmachine: (test-preload-490392) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.200 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/test-preload-490392/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1018 15:19:34.727242 1795748 main.go:141] libmachine: (test-preload-490392) DBG | About to run SSH command:
	I1018 15:19:34.727279 1795748 main.go:141] libmachine: (test-preload-490392) DBG | exit 0
	I1018 15:19:46.019009 1795748 main.go:141] libmachine: (test-preload-490392) DBG | SSH cmd err, output: exit status 255: 
	I1018 15:19:46.019040 1795748 main.go:141] libmachine: (test-preload-490392) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1018 15:19:46.019048 1795748 main.go:141] libmachine: (test-preload-490392) DBG | command : exit 0
	I1018 15:19:46.019054 1795748 main.go:141] libmachine: (test-preload-490392) DBG | err     : exit status 255
	I1018 15:19:46.019061 1795748 main.go:141] libmachine: (test-preload-490392) DBG | output  : 
	I1018 15:19:49.019650 1795748 main.go:141] libmachine: (test-preload-490392) DBG | Getting to WaitForSSH function...
	I1018 15:19:49.022491 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:49.022938 1795748 main.go:141] libmachine: (test-preload-490392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:af:24", ip: ""} in network mk-test-preload-490392: {Iface:virbr1 ExpiryTime:2025-10-18 16:19:45 +0000 UTC Type:0 Mac:52:54:00:0b:af:24 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:test-preload-490392 Clientid:01:52:54:00:0b:af:24}
	I1018 15:19:49.022985 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined IP address 192.168.39.200 and MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:49.023170 1795748 main.go:141] libmachine: (test-preload-490392) DBG | Using SSH client type: external
	I1018 15:19:49.023195 1795748 main.go:141] libmachine: (test-preload-490392) DBG | Using SSH private key: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/test-preload-490392/id_rsa (-rw-------)
	I1018 15:19:49.023239 1795748 main.go:141] libmachine: (test-preload-490392) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.200 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/test-preload-490392/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1018 15:19:49.023256 1795748 main.go:141] libmachine: (test-preload-490392) DBG | About to run SSH command:
	I1018 15:19:49.023283 1795748 main.go:141] libmachine: (test-preload-490392) DBG | exit 0
	I1018 15:19:49.155890 1795748 main.go:141] libmachine: (test-preload-490392) DBG | SSH cmd err, output: <nil>: 
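Editor's note: the WaitForSSH exchange above shells out to /usr/bin/ssh with host-key checking disabled and probes the guest with `exit 0`: the first attempt fails with exit status 255 while the VM is still booting, and the retry about three seconds later succeeds. A minimal sketch of that readiness loop, with the host, key path, and timing as placeholder assumptions:

```go
// Minimal sketch of the "exit 0" SSH readiness probe seen above: shell out
// to ssh with host-key checking disabled and retry until the command
// succeeds or the deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForSSH(host, keyPath string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for {
		cmd := exec.Command("/usr/bin/ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+host, "exit 0")
		if err := cmd.Run(); err == nil {
			return nil // sshd answered and ran the command
		} else if time.Now().After(stop) {
			return fmt.Errorf("ssh not ready after %s: %v", deadline, err)
		}
		time.Sleep(3 * time.Second) // matches the ~3s retry gap in the log
	}
}

func main() {
	// Host IP from the log; key path and deadline are placeholders.
	if err := waitForSSH("192.168.39.200", "/path/to/id_rsa", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```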
	I1018 15:19:49.156330 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetConfigRaw
	I1018 15:19:49.157034 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetIP
	I1018 15:19:49.159874 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:49.160209 1795748 main.go:141] libmachine: (test-preload-490392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:af:24", ip: ""} in network mk-test-preload-490392: {Iface:virbr1 ExpiryTime:2025-10-18 16:19:45 +0000 UTC Type:0 Mac:52:54:00:0b:af:24 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:test-preload-490392 Clientid:01:52:54:00:0b:af:24}
	I1018 15:19:49.160236 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined IP address 192.168.39.200 and MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:49.160498 1795748 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/test-preload-490392/config.json ...
	I1018 15:19:49.160710 1795748 machine.go:93] provisionDockerMachine start ...
	I1018 15:19:49.160728 1795748 main.go:141] libmachine: (test-preload-490392) Calling .DriverName
	I1018 15:19:49.160936 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHHostname
	I1018 15:19:49.163509 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:49.163951 1795748 main.go:141] libmachine: (test-preload-490392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:af:24", ip: ""} in network mk-test-preload-490392: {Iface:virbr1 ExpiryTime:2025-10-18 16:19:45 +0000 UTC Type:0 Mac:52:54:00:0b:af:24 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:test-preload-490392 Clientid:01:52:54:00:0b:af:24}
	I1018 15:19:49.163980 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined IP address 192.168.39.200 and MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:49.164157 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHPort
	I1018 15:19:49.164364 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHKeyPath
	I1018 15:19:49.164542 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHKeyPath
	I1018 15:19:49.164698 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHUsername
	I1018 15:19:49.164868 1795748 main.go:141] libmachine: Using SSH client type: native
	I1018 15:19:49.165082 1795748 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1018 15:19:49.165093 1795748 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 15:19:49.274874 1795748 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1018 15:19:49.274926 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetMachineName
	I1018 15:19:49.275187 1795748 buildroot.go:166] provisioning hostname "test-preload-490392"
	I1018 15:19:49.275222 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetMachineName
	I1018 15:19:49.275454 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHHostname
	I1018 15:19:49.278687 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:49.279102 1795748 main.go:141] libmachine: (test-preload-490392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:af:24", ip: ""} in network mk-test-preload-490392: {Iface:virbr1 ExpiryTime:2025-10-18 16:19:45 +0000 UTC Type:0 Mac:52:54:00:0b:af:24 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:test-preload-490392 Clientid:01:52:54:00:0b:af:24}
	I1018 15:19:49.279131 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined IP address 192.168.39.200 and MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:49.279291 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHPort
	I1018 15:19:49.279500 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHKeyPath
	I1018 15:19:49.279665 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHKeyPath
	I1018 15:19:49.279820 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHUsername
	I1018 15:19:49.279970 1795748 main.go:141] libmachine: Using SSH client type: native
	I1018 15:19:49.280178 1795748 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1018 15:19:49.280190 1795748 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-490392 && echo "test-preload-490392" | sudo tee /etc/hostname
	I1018 15:19:49.406833 1795748 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-490392
	
	I1018 15:19:49.406881 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHHostname
	I1018 15:19:49.410091 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:49.410567 1795748 main.go:141] libmachine: (test-preload-490392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:af:24", ip: ""} in network mk-test-preload-490392: {Iface:virbr1 ExpiryTime:2025-10-18 16:19:45 +0000 UTC Type:0 Mac:52:54:00:0b:af:24 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:test-preload-490392 Clientid:01:52:54:00:0b:af:24}
	I1018 15:19:49.410596 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined IP address 192.168.39.200 and MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:49.410857 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHPort
	I1018 15:19:49.411135 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHKeyPath
	I1018 15:19:49.411373 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHKeyPath
	I1018 15:19:49.411625 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHUsername
	I1018 15:19:49.411854 1795748 main.go:141] libmachine: Using SSH client type: native
	I1018 15:19:49.412075 1795748 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1018 15:19:49.412091 1795748 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-490392' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-490392/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-490392' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 15:19:49.529913 1795748 main.go:141] libmachine: SSH cmd err, output: <nil>: 
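Editor's note: the SSH command above idempotently maps 127.0.1.1 to the machine name: it leaves /etc/hosts alone if the name is already present, rewrites an existing 127.0.1.1 line if there is one, and appends otherwise. The same logic in Go, for illustration only (the real edit runs over SSH with sudo):

```go
// Pure-Go restatement of the idempotent /etc/hosts edit performed over SSH
// above; illustration only.
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry returns hosts with name mapped to 127.0.1.1, changing
// nothing if the name is already mapped.
func ensureHostsEntry(hosts, name string) string {
	if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
		return hosts // already mapped
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(ensureHostsEntry(string(data), "test-preload-490392"))
}
```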
	I1018 15:19:49.529956 1795748 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21409-1755824/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-1755824/.minikube}
	I1018 15:19:49.529993 1795748 buildroot.go:174] setting up certificates
	I1018 15:19:49.530007 1795748 provision.go:84] configureAuth start
	I1018 15:19:49.530022 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetMachineName
	I1018 15:19:49.530386 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetIP
	I1018 15:19:49.533969 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:49.534420 1795748 main.go:141] libmachine: (test-preload-490392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:af:24", ip: ""} in network mk-test-preload-490392: {Iface:virbr1 ExpiryTime:2025-10-18 16:19:45 +0000 UTC Type:0 Mac:52:54:00:0b:af:24 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:test-preload-490392 Clientid:01:52:54:00:0b:af:24}
	I1018 15:19:49.534457 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined IP address 192.168.39.200 and MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:49.534622 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHHostname
	I1018 15:19:49.537577 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:49.537973 1795748 main.go:141] libmachine: (test-preload-490392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:af:24", ip: ""} in network mk-test-preload-490392: {Iface:virbr1 ExpiryTime:2025-10-18 16:19:45 +0000 UTC Type:0 Mac:52:54:00:0b:af:24 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:test-preload-490392 Clientid:01:52:54:00:0b:af:24}
	I1018 15:19:49.538004 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined IP address 192.168.39.200 and MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:49.538298 1795748 provision.go:143] copyHostCerts
	I1018 15:19:49.538392 1795748 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1755824/.minikube/cert.pem, removing ...
	I1018 15:19:49.538418 1795748 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1755824/.minikube/cert.pem
	I1018 15:19:49.538527 1795748 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-1755824/.minikube/cert.pem (1123 bytes)
	I1018 15:19:49.538659 1795748 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1755824/.minikube/key.pem, removing ...
	I1018 15:19:49.538672 1795748 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1755824/.minikube/key.pem
	I1018 15:19:49.538714 1795748 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-1755824/.minikube/key.pem (1675 bytes)
	I1018 15:19:49.538812 1795748 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.pem, removing ...
	I1018 15:19:49.538824 1795748 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.pem
	I1018 15:19:49.538865 1795748 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.pem (1082 bytes)
	I1018 15:19:49.538942 1795748 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca-key.pem org=jenkins.test-preload-490392 san=[127.0.0.1 192.168.39.200 localhost minikube test-preload-490392]
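Editor's note: the server certificate above is issued from the profile CA with the SANs listed in the log (loopback, the VM IP, and the host names). Below is a stdlib crypto/x509 sketch of issuing such a SAN-bearing server certificate; here the CA is generated inline for brevity, whereas minikube loads it from ca.pem and ca-key.pem, and the validity period is a placeholder.

```go
// Sketch of issuing a server certificate with the SANs listed above,
// signed by a CA key pair. Not minikube's provision code.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA generated inline for brevity; minikube loads ca.pem / ca-key.pem.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour), // placeholder validity
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-490392"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the provision log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.200")},
		DNSNames:    []string{"localhost", "minikube", "test-preload-490392"},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```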
	I1018 15:19:49.595853 1795748 provision.go:177] copyRemoteCerts
	I1018 15:19:49.595924 1795748 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 15:19:49.595954 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHHostname
	I1018 15:19:49.599333 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:49.599816 1795748 main.go:141] libmachine: (test-preload-490392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:af:24", ip: ""} in network mk-test-preload-490392: {Iface:virbr1 ExpiryTime:2025-10-18 16:19:45 +0000 UTC Type:0 Mac:52:54:00:0b:af:24 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:test-preload-490392 Clientid:01:52:54:00:0b:af:24}
	I1018 15:19:49.599848 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined IP address 192.168.39.200 and MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:49.600098 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHPort
	I1018 15:19:49.600316 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHKeyPath
	I1018 15:19:49.600500 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHUsername
	I1018 15:19:49.600701 1795748 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/test-preload-490392/id_rsa Username:docker}
	I1018 15:19:49.684168 1795748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 15:19:49.715321 1795748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1018 15:19:49.746882 1795748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 15:19:49.778929 1795748 provision.go:87] duration metric: took 248.907324ms to configureAuth
	I1018 15:19:49.778964 1795748 buildroot.go:189] setting minikube options for container-runtime
	I1018 15:19:49.779172 1795748 config.go:182] Loaded profile config "test-preload-490392": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1018 15:19:49.779278 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHHostname
	I1018 15:19:49.782292 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:49.782787 1795748 main.go:141] libmachine: (test-preload-490392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:af:24", ip: ""} in network mk-test-preload-490392: {Iface:virbr1 ExpiryTime:2025-10-18 16:19:45 +0000 UTC Type:0 Mac:52:54:00:0b:af:24 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:test-preload-490392 Clientid:01:52:54:00:0b:af:24}
	I1018 15:19:49.782821 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined IP address 192.168.39.200 and MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:49.783014 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHPort
	I1018 15:19:49.783237 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHKeyPath
	I1018 15:19:49.783439 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHKeyPath
	I1018 15:19:49.783548 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHUsername
	I1018 15:19:49.783665 1795748 main.go:141] libmachine: Using SSH client type: native
	I1018 15:19:49.783937 1795748 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1018 15:19:49.783956 1795748 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 15:19:50.039139 1795748 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 15:19:50.039172 1795748 machine.go:96] duration metric: took 878.4489ms to provisionDockerMachine
	I1018 15:19:50.039189 1795748 start.go:293] postStartSetup for "test-preload-490392" (driver="kvm2")
	I1018 15:19:50.039201 1795748 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 15:19:50.039221 1795748 main.go:141] libmachine: (test-preload-490392) Calling .DriverName
	I1018 15:19:50.039594 1795748 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 15:19:50.039628 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHHostname
	I1018 15:19:50.043269 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:50.043750 1795748 main.go:141] libmachine: (test-preload-490392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:af:24", ip: ""} in network mk-test-preload-490392: {Iface:virbr1 ExpiryTime:2025-10-18 16:19:45 +0000 UTC Type:0 Mac:52:54:00:0b:af:24 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:test-preload-490392 Clientid:01:52:54:00:0b:af:24}
	I1018 15:19:50.043782 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined IP address 192.168.39.200 and MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:50.044013 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHPort
	I1018 15:19:50.044234 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHKeyPath
	I1018 15:19:50.044423 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHUsername
	I1018 15:19:50.044560 1795748 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/test-preload-490392/id_rsa Username:docker}
	I1018 15:19:50.129464 1795748 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 15:19:50.135024 1795748 info.go:137] Remote host: Buildroot 2025.02
	I1018 15:19:50.135054 1795748 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1755824/.minikube/addons for local assets ...
	I1018 15:19:50.135123 1795748 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1755824/.minikube/files for local assets ...
	I1018 15:19:50.135199 1795748 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-1755824/.minikube/files/etc/ssl/certs/17597922.pem -> 17597922.pem in /etc/ssl/certs
	I1018 15:19:50.135288 1795748 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 15:19:50.147446 1795748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/files/etc/ssl/certs/17597922.pem --> /etc/ssl/certs/17597922.pem (1708 bytes)
	I1018 15:19:50.179229 1795748 start.go:296] duration metric: took 140.022821ms for postStartSetup
	I1018 15:19:50.179274 1795748 fix.go:56] duration metric: took 16.845793171s for fixHost
	I1018 15:19:50.179297 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHHostname
	I1018 15:19:50.182694 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:50.183119 1795748 main.go:141] libmachine: (test-preload-490392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:af:24", ip: ""} in network mk-test-preload-490392: {Iface:virbr1 ExpiryTime:2025-10-18 16:19:45 +0000 UTC Type:0 Mac:52:54:00:0b:af:24 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:test-preload-490392 Clientid:01:52:54:00:0b:af:24}
	I1018 15:19:50.183151 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined IP address 192.168.39.200 and MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:50.183373 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHPort
	I1018 15:19:50.183625 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHKeyPath
	I1018 15:19:50.183760 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHKeyPath
	I1018 15:19:50.183870 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHUsername
	I1018 15:19:50.184069 1795748 main.go:141] libmachine: Using SSH client type: native
	I1018 15:19:50.184380 1795748 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1018 15:19:50.184398 1795748 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1018 15:19:50.291473 1795748 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760800790.252125718
	
	I1018 15:19:50.291499 1795748 fix.go:216] guest clock: 1760800790.252125718
	I1018 15:19:50.291507 1795748 fix.go:229] Guest: 2025-10-18 15:19:50.252125718 +0000 UTC Remote: 2025-10-18 15:19:50.179278573 +0000 UTC m=+19.440471592 (delta=72.847145ms)
	I1018 15:19:50.291528 1795748 fix.go:200] guest clock delta is within tolerance: 72.847145ms
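The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and only treat the start as healthy when the delta is within a tolerance. A minimal Go sketch of that comparison (the one-second threshold and function names are assumptions, not minikube's actual values):

	package main

	import (
		"fmt"
		"time"
	)

	// withinTolerance reports whether the guest/host clock delta is small
	// enough that the guest clock does not need to be reset.
	func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(72847145 * time.Nanosecond)          // the 72.847145ms delta logged above
		fmt.Println(withinTolerance(guest, host, time.Second)) // true
	}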
	I1018 15:19:50.291533 1795748 start.go:83] releasing machines lock for "test-preload-490392", held for 16.958068256s
	I1018 15:19:50.291560 1795748 main.go:141] libmachine: (test-preload-490392) Calling .DriverName
	I1018 15:19:50.291919 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetIP
	I1018 15:19:50.295256 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:50.295685 1795748 main.go:141] libmachine: (test-preload-490392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:af:24", ip: ""} in network mk-test-preload-490392: {Iface:virbr1 ExpiryTime:2025-10-18 16:19:45 +0000 UTC Type:0 Mac:52:54:00:0b:af:24 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:test-preload-490392 Clientid:01:52:54:00:0b:af:24}
	I1018 15:19:50.295712 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined IP address 192.168.39.200 and MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:50.295883 1795748 main.go:141] libmachine: (test-preload-490392) Calling .DriverName
	I1018 15:19:50.296511 1795748 main.go:141] libmachine: (test-preload-490392) Calling .DriverName
	I1018 15:19:50.296726 1795748 main.go:141] libmachine: (test-preload-490392) Calling .DriverName
	I1018 15:19:50.296816 1795748 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 15:19:50.296895 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHHostname
	I1018 15:19:50.296942 1795748 ssh_runner.go:195] Run: cat /version.json
	I1018 15:19:50.296965 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHHostname
	I1018 15:19:50.300575 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:50.300611 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:50.301067 1795748 main.go:141] libmachine: (test-preload-490392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:af:24", ip: ""} in network mk-test-preload-490392: {Iface:virbr1 ExpiryTime:2025-10-18 16:19:45 +0000 UTC Type:0 Mac:52:54:00:0b:af:24 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:test-preload-490392 Clientid:01:52:54:00:0b:af:24}
	I1018 15:19:50.301119 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined IP address 192.168.39.200 and MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:50.301171 1795748 main.go:141] libmachine: (test-preload-490392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:af:24", ip: ""} in network mk-test-preload-490392: {Iface:virbr1 ExpiryTime:2025-10-18 16:19:45 +0000 UTC Type:0 Mac:52:54:00:0b:af:24 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:test-preload-490392 Clientid:01:52:54:00:0b:af:24}
	I1018 15:19:50.301190 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined IP address 192.168.39.200 and MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:50.301358 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHPort
	I1018 15:19:50.301405 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHPort
	I1018 15:19:50.301634 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHKeyPath
	I1018 15:19:50.301643 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHKeyPath
	I1018 15:19:50.301857 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHUsername
	I1018 15:19:50.301864 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHUsername
	I1018 15:19:50.302037 1795748 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/test-preload-490392/id_rsa Username:docker}
	I1018 15:19:50.302036 1795748 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/test-preload-490392/id_rsa Username:docker}
	I1018 15:19:50.379990 1795748 ssh_runner.go:195] Run: systemctl --version
	I1018 15:19:50.406884 1795748 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 15:19:50.556364 1795748 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 15:19:50.563915 1795748 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 15:19:50.564008 1795748 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 15:19:50.586279 1795748 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
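The `find ... -exec mv` above sidelines any bridge/podman CNI configs by renaming them to `<name>.mk_disabled`, so the CNI loader skips them. A self-contained Go sketch of the same rename pass (run locally here, not over SSH as in the log):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableBridgeCNI renames bridge/podman configs in dir to <name>.mk_disabled,
	// mirroring the find/mv command above, and returns what it disabled.
	func disableBridgeCNI(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var disabled []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return disabled, err
				}
				disabled = append(disabled, src)
			}
		}
		return disabled, nil
	}

	func main() {
		disabled, err := disableBridgeCNI("/etc/cni/net.d")
		if err != nil {
			fmt.Println("error:", err)
		}
		fmt.Println("disabled:", disabled)
	}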
	I1018 15:19:50.586310 1795748 start.go:495] detecting cgroup driver to use...
	I1018 15:19:50.586393 1795748 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 15:19:50.612558 1795748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 15:19:50.631465 1795748 docker.go:218] disabling cri-docker service (if available) ...
	I1018 15:19:50.631534 1795748 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 15:19:50.650859 1795748 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 15:19:50.668834 1795748 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 15:19:50.817309 1795748 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 15:19:51.030205 1795748 docker.go:234] disabling docker service ...
	I1018 15:19:51.030281 1795748 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 15:19:51.047009 1795748 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 15:19:51.062804 1795748 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 15:19:51.221377 1795748 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 15:19:51.365655 1795748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 15:19:51.382508 1795748 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 15:19:51.406411 1795748 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1018 15:19:51.406486 1795748 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:19:51.420275 1795748 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 15:19:51.420374 1795748 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:19:51.433892 1795748 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:19:51.452234 1795748 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:19:51.467763 1795748 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 15:19:51.481885 1795748 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:19:51.495060 1795748 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:19:51.517243 1795748 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
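Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with a fragment like the following (reconstructed from the commands; other keys in the file are untouched):

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]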
	I1018 15:19:51.530991 1795748 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 15:19:51.542956 1795748 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1018 15:19:51.543042 1795748 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1018 15:19:51.564958 1795748 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 15:19:51.581093 1795748 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:19:51.728263 1795748 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 15:19:51.844634 1795748 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 15:19:51.844722 1795748 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 15:19:51.851222 1795748 start.go:563] Will wait 60s for crictl version
	I1018 15:19:51.851309 1795748 ssh_runner.go:195] Run: which crictl
	I1018 15:19:51.856691 1795748 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1018 15:19:51.899643 1795748 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1018 15:19:51.899733 1795748 ssh_runner.go:195] Run: crio --version
	I1018 15:19:51.930300 1795748 ssh_runner.go:195] Run: crio --version
	I1018 15:19:51.964750 1795748 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1018 15:19:51.966202 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetIP
	I1018 15:19:51.969429 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:51.969886 1795748 main.go:141] libmachine: (test-preload-490392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:af:24", ip: ""} in network mk-test-preload-490392: {Iface:virbr1 ExpiryTime:2025-10-18 16:19:45 +0000 UTC Type:0 Mac:52:54:00:0b:af:24 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:test-preload-490392 Clientid:01:52:54:00:0b:af:24}
	I1018 15:19:51.969918 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined IP address 192.168.39.200 and MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:19:51.970191 1795748 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1018 15:19:51.977806 1795748 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
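The /etc/hosts update above uses a grep-v-then-append upsert so repeated starts never duplicate the host.minikube.internal line. A local Go sketch of the same idea (the file path is a stand-in; the real command runs over SSH):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHostsEntry drops any existing line ending in "\t<host>", then
	// appends "ip\thost" -- mirroring `{ grep -v ...; echo ...; } > tmp; cp`.
	func upsertHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		kept := lines[:0]
		for _, line := range lines {
			if strings.HasSuffix(line, "\t"+host) {
				continue // stale entry: drop it
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := upsertHostsEntry("hosts.txt", "192.168.39.1", "host.minikube.internal"); err != nil {
			fmt.Println("error:", err)
		}
	}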
	I1018 15:19:51.994686 1795748 kubeadm.go:883] updating cluster {Name:test-preload-490392 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-490392 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 15:19:51.994873 1795748 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1018 15:19:51.994940 1795748 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 15:19:52.039324 1795748 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1018 15:19:52.039437 1795748 ssh_runner.go:195] Run: which lz4
	I1018 15:19:52.044380 1795748 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1018 15:19:52.050109 1795748 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1018 15:19:52.050159 1795748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1018 15:19:53.662391 1795748 crio.go:462] duration metric: took 1.618064574s to copy over tarball
	I1018 15:19:53.662491 1795748 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1018 15:19:55.413641 1795748 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.75111444s)
	I1018 15:19:55.413677 1795748 crio.go:469] duration metric: took 1.751246463s to extract the tarball
	I1018 15:19:55.413688 1795748 ssh_runner.go:146] rm: /preloaded.tar.lz4
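The sequence above is a probe-then-ship pattern: stat the remote preload, copy it over only when absent, extract, then delete the tarball. A hedged Go sketch of that flow (the runner interface is a stand-in for minikube's ssh_runner; the command strings match the log):

	package preload

	import "fmt"

	type runner interface {
		Run(cmd string) error            // assumed: runs cmd on the guest over SSH
		Copy(local, remote string) error // assumed: scp local -> remote
	}

	// ensurePreload copies the cached tarball only if the guest lacks it,
	// extracts it under /var, and removes the tarball afterwards.
	func ensurePreload(r runner, localTar string) error {
		if err := r.Run(`stat -c "%s %y" /preloaded.tar.lz4`); err != nil {
			// not present yet: ship the cached tarball
			if err := r.Copy(localTar, "/preloaded.tar.lz4"); err != nil {
				return fmt.Errorf("copy preload: %w", err)
			}
		}
		if err := r.Run("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
			return fmt.Errorf("extract preload: %w", err)
		}
		return r.Run("rm -f /preloaded.tar.lz4")
	}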
	I1018 15:19:55.454955 1795748 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 15:19:55.497806 1795748 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 15:19:55.497842 1795748 cache_images.go:85] Images are preloaded, skipping loading
	I1018 15:19:55.497851 1795748 kubeadm.go:934] updating node { 192.168.39.200 8443 v1.32.0 crio true true} ...
	I1018 15:19:55.498004 1795748 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-490392 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-490392 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 15:19:55.498078 1795748 ssh_runner.go:195] Run: crio config
	I1018 15:19:55.546476 1795748 cni.go:84] Creating CNI manager for ""
	I1018 15:19:55.546511 1795748 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 15:19:55.546531 1795748 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 15:19:55.546553 1795748 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.200 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-490392 NodeName:test-preload-490392 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 15:19:55.546688 1795748 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-490392"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.200"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.200"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
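The kubeadm.yaml above (the 2222-byte file written to /var/tmp/minikube/kubeadm.yaml.new below) is rendered from the kubeadm options logged at kubeadm.go:190. A minimal text/template sketch of that rendering idea (the template and field names here are illustrative, not minikube's actual kubeadm template):

	package main

	import (
		"os"
		"text/template"
	)

	type params struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		PodSubnet        string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	networking:
	  podSubnet: "{{.PodSubnet}}"
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		_ = t.Execute(os.Stdout, params{"192.168.39.200", 8443, "test-preload-490392", "10.244.0.0/16"})
	}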
	I1018 15:19:55.546750 1795748 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1018 15:19:55.560803 1795748 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 15:19:55.560899 1795748 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 15:19:55.573849 1795748 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1018 15:19:55.596164 1795748 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 15:19:55.617847 1795748 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1018 15:19:55.641522 1795748 ssh_runner.go:195] Run: grep 192.168.39.200	control-plane.minikube.internal$ /etc/hosts
	I1018 15:19:55.646046 1795748 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.200	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 15:19:55.661807 1795748 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:19:55.811563 1795748 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 15:19:55.834081 1795748 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/test-preload-490392 for IP: 192.168.39.200
	I1018 15:19:55.834113 1795748 certs.go:195] generating shared ca certs ...
	I1018 15:19:55.834138 1795748 certs.go:227] acquiring lock for ca certs: {Name:mk20fae4d22bb4937e66ac0eaa52c1608fa22770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:19:55.834322 1795748 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.key
	I1018 15:19:55.834405 1795748 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.key
	I1018 15:19:55.834421 1795748 certs.go:257] generating profile certs ...
	I1018 15:19:55.834644 1795748 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/test-preload-490392/client.key
	I1018 15:19:55.834765 1795748 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/test-preload-490392/apiserver.key.2af82aa3
	I1018 15:19:55.834837 1795748 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/test-preload-490392/proxy-client.key
	I1018 15:19:55.834989 1795748 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/1759792.pem (1338 bytes)
	W1018 15:19:55.835034 1795748 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/1759792_empty.pem, impossibly tiny 0 bytes
	I1018 15:19:55.835048 1795748 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 15:19:55.835081 1795748 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem (1082 bytes)
	I1018 15:19:55.835112 1795748 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/cert.pem (1123 bytes)
	I1018 15:19:55.835145 1795748 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/key.pem (1675 bytes)
	I1018 15:19:55.835214 1795748 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/files/etc/ssl/certs/17597922.pem (1708 bytes)
	I1018 15:19:55.835845 1795748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 15:19:55.884569 1795748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 15:19:55.925540 1795748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 15:19:55.958271 1795748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 15:19:55.990406 1795748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/test-preload-490392/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1018 15:19:56.023755 1795748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/test-preload-490392/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 15:19:56.056489 1795748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/test-preload-490392/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 15:19:56.089864 1795748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/test-preload-490392/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 15:19:56.122718 1795748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 15:19:56.155331 1795748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/1759792.pem --> /usr/share/ca-certificates/1759792.pem (1338 bytes)
	I1018 15:19:56.188802 1795748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/files/etc/ssl/certs/17597922.pem --> /usr/share/ca-certificates/17597922.pem (1708 bytes)
	I1018 15:19:56.222716 1795748 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 15:19:56.246064 1795748 ssh_runner.go:195] Run: openssl version
	I1018 15:19:56.253667 1795748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 15:19:56.268357 1795748 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:19:56.274234 1795748 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 14:09 /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:19:56.274318 1795748 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:19:56.282305 1795748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 15:19:56.296662 1795748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1759792.pem && ln -fs /usr/share/ca-certificates/1759792.pem /etc/ssl/certs/1759792.pem"
	I1018 15:19:56.311212 1795748 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1759792.pem
	I1018 15:19:56.317172 1795748 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 14:22 /usr/share/ca-certificates/1759792.pem
	I1018 15:19:56.317250 1795748 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1759792.pem
	I1018 15:19:56.325109 1795748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1759792.pem /etc/ssl/certs/51391683.0"
	I1018 15:19:56.339921 1795748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17597922.pem && ln -fs /usr/share/ca-certificates/17597922.pem /etc/ssl/certs/17597922.pem"
	I1018 15:19:56.355103 1795748 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17597922.pem
	I1018 15:19:56.361073 1795748 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 14:22 /usr/share/ca-certificates/17597922.pem
	I1018 15:19:56.361139 1795748 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17597922.pem
	I1018 15:19:56.368936 1795748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17597922.pem /etc/ssl/certs/3ec20f2e.0"
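The `test -L ... || ln -fs` steps above create OpenSSL hash links: a CA is looked up by a symlink named <subject_hash>.0 (e.g. b5213941.0), where the hash comes from `openssl x509 -hash -noout`. A Go sketch of one such link creation (error handling trimmed; paths as seen in the log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCert computes the OpenSSL subject hash for a PEM and symlinks
	// <certDir>/<hash>.0 to it, mirroring the `ln -fs` above.
	func linkCert(pem, certDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join(certDir, hash+".0")
		_ = os.Remove(link) // mirror the force flag of `ln -fs`
		return os.Symlink(pem, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Println("error:", err)
		}
	}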
	I1018 15:19:56.383189 1795748 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 15:19:56.389169 1795748 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 15:19:56.397278 1795748 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 15:19:56.405396 1795748 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 15:19:56.413957 1795748 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 15:19:56.422454 1795748 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 15:19:56.430740 1795748 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
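The `-checkend 86400` probes above ask whether each control-plane cert expires within 24 hours. The same check in pure Go with crypto/x509 (the path is one of those probed above):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM cert at path expires within d,
	// equivalent to `openssl x509 -noout -checkend <seconds>`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(soon, err)
	}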
	I1018 15:19:56.438858 1795748 kubeadm.go:400] StartCluster: {Name:test-preload-490392 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-490392 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:19:56.438950 1795748 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 15:19:56.439021 1795748 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 15:19:56.480387 1795748 cri.go:89] found id: ""
	I1018 15:19:56.480471 1795748 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 15:19:56.493523 1795748 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 15:19:56.493553 1795748 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 15:19:56.493603 1795748 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 15:19:56.506413 1795748 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 15:19:56.506929 1795748 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-490392" does not appear in /home/jenkins/minikube-integration/21409-1755824/kubeconfig
	I1018 15:19:56.507064 1795748 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-1755824/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-490392" cluster setting kubeconfig missing "test-preload-490392" context setting]
	I1018 15:19:56.507425 1795748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/kubeconfig: {Name:mkd0359d239071160661347e1005ef052a3265ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
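The repair above re-adds the missing cluster and context entries before rewriting the kubeconfig under a file lock. A hedged sketch of that upsert using client-go's clientcmd package (requires k8s.io/client-go; names and error handling simplified, not minikube's actual kubeconfig code):

	package main

	import (
		"k8s.io/client-go/tools/clientcmd"
		api "k8s.io/client-go/tools/clientcmd/api"
	)

	// repair ensures the named cluster and context exist in the kubeconfig,
	// then writes the file back.
	func repair(path, name, server string) error {
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			return err
		}
		if _, ok := cfg.Clusters[name]; !ok {
			c := api.NewCluster()
			c.Server = server
			cfg.Clusters[name] = c
		}
		if _, ok := cfg.Contexts[name]; !ok {
			ctx := api.NewContext()
			ctx.Cluster = name
			ctx.AuthInfo = name
			cfg.Contexts[name] = ctx
		}
		return clientcmd.WriteToFile(*cfg, path)
	}

	func main() {
		_ = repair("/home/jenkins/minikube-integration/21409-1755824/kubeconfig",
			"test-preload-490392", "https://192.168.39.200:8443")
	}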
	I1018 15:19:56.507974 1795748 kapi.go:59] client config for test-preload-490392: &rest.Config{Host:"https://192.168.39.200:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/test-preload-490392/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/test-preload-490392/client.key", CAFile:"/home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 15:19:56.508409 1795748 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1018 15:19:56.508426 1795748 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1018 15:19:56.508430 1795748 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1018 15:19:56.508434 1795748 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1018 15:19:56.508438 1795748 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1018 15:19:56.508785 1795748 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 15:19:56.523869 1795748 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.39.200
	I1018 15:19:56.523919 1795748 kubeadm.go:1160] stopping kube-system containers ...
	I1018 15:19:56.523939 1795748 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1018 15:19:56.524017 1795748 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 15:19:56.582158 1795748 cri.go:89] found id: ""
	I1018 15:19:56.582249 1795748 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1018 15:19:56.613229 1795748 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 15:19:56.626465 1795748 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 15:19:56.626491 1795748 kubeadm.go:157] found existing configuration files:
	
	I1018 15:19:56.626546 1795748 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 15:19:56.638220 1795748 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 15:19:56.638289 1795748 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 15:19:56.651123 1795748 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 15:19:56.662728 1795748 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 15:19:56.662811 1795748 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 15:19:56.675855 1795748 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 15:19:56.687451 1795748 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 15:19:56.687533 1795748 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 15:19:56.700089 1795748 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 15:19:56.711336 1795748 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 15:19:56.711413 1795748 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 15:19:56.724042 1795748 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 15:19:56.736400 1795748 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 15:19:56.796240 1795748 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 15:19:57.872272 1795748 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.075993118s)
	I1018 15:19:57.872386 1795748 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1018 15:19:58.131170 1795748 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 15:19:58.210184 1795748 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1018 15:19:58.288379 1795748 api_server.go:52] waiting for apiserver process to appear ...
	I1018 15:19:58.288504 1795748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 15:19:58.789593 1795748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 15:19:59.289388 1795748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 15:19:59.789441 1795748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 15:20:00.288990 1795748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 15:20:00.789266 1795748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 15:20:00.818937 1795748 api_server.go:72] duration metric: took 2.530583576s to wait for apiserver process to appear ...
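The half-second pgrep cadence above is a plain poll-until-ready loop. A standalone Go sketch of that pattern (the check closure stands in for `sudo pgrep -xnf kube-apiserver.*minikube.*` over SSH; timeout and interval mirror the log):

	package main

	import (
		"fmt"
		"time"
	)

	// waitForProcess polls check every interval until it succeeds or the
	// timeout elapses.
	func waitForProcess(check func() bool, timeout, interval time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if check() {
				return nil
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("process did not appear within %s", timeout)
	}

	func main() {
		start := time.Now()
		err := waitForProcess(func() bool {
			// stand-in for the pgrep probe; succeeds after ~2.5s like the log
			return time.Since(start) > 2*time.Second
		}, time.Minute, 500*time.Millisecond)
		fmt.Println(err)
	}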
	I1018 15:20:00.818977 1795748 api_server.go:88] waiting for apiserver healthz status ...
	I1018 15:20:00.819002 1795748 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1018 15:20:03.035925 1795748 api_server.go:279] https://192.168.39.200:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 15:20:03.035961 1795748 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 15:20:03.035979 1795748 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1018 15:20:03.079769 1795748 api_server.go:279] https://192.168.39.200:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 15:20:03.079804 1795748 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 15:20:03.319202 1795748 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1018 15:20:03.333888 1795748 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 15:20:03.333918 1795748 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 15:20:03.820127 1795748 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1018 15:20:03.825307 1795748 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 15:20:03.825333 1795748 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 15:20:04.320050 1795748 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1018 15:20:04.324705 1795748 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I1018 15:20:04.334465 1795748 api_server.go:141] control plane version: v1.32.0
	I1018 15:20:04.334495 1795748 api_server.go:131] duration metric: took 3.515510403s to wait for apiserver health ...
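The healthz sequence above keeps GETting /healthz until it returns 200 "ok": the early 403s (anonymous access before RBAC bootstrap roles exist) and 500s (post-start hooks still failing) both mean "retry". A Go sketch of that loop; TLS verification is skipped here for brevity, whereas the client config above uses the cluster's client certs:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthz polls url until it returns 200 or the timeout elapses.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				code := resp.StatusCode
				resp.Body.Close()
				if code == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy within %s", timeout)
	}

	func main() {
		fmt.Println(waitHealthz("https://192.168.39.200:8443/healthz", 4*time.Minute))
	}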
	I1018 15:20:04.334505 1795748 cni.go:84] Creating CNI manager for ""
	I1018 15:20:04.334511 1795748 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 15:20:04.336463 1795748 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1018 15:20:04.338107 1795748 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1018 15:20:04.353694 1795748 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1018 15:20:04.378449 1795748 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 15:20:04.385299 1795748 system_pods.go:59] 7 kube-system pods found
	I1018 15:20:04.385377 1795748 system_pods.go:61] "coredns-668d6bf9bc-jrt2c" [52ad6fcc-15b5-4ae9-8b68-1bbb14e09b93] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:20:04.385390 1795748 system_pods.go:61] "etcd-test-preload-490392" [b40dcbcf-867b-449e-af25-31827a308dc6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 15:20:04.385402 1795748 system_pods.go:61] "kube-apiserver-test-preload-490392" [f40c6cb4-3915-4418-88b5-427ac467d207] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 15:20:04.385412 1795748 system_pods.go:61] "kube-controller-manager-test-preload-490392" [ca7782b9-7b33-4f7f-b9c5-f5dba2b0e04d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 15:20:04.385425 1795748 system_pods.go:61] "kube-proxy-t8sg5" [291161a4-db2b-4319-b46a-f7161138422d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 15:20:04.385435 1795748 system_pods.go:61] "kube-scheduler-test-preload-490392" [8aa3d73f-d7b4-4f57-8ede-f6a8e0da25a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 15:20:04.385445 1795748 system_pods.go:61] "storage-provisioner" [21a244e9-183d-4167-9115-a5c775e1b585] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 15:20:04.385458 1795748 system_pods.go:74] duration metric: took 6.980444ms to wait for pod list to return data ...
	I1018 15:20:04.385474 1795748 node_conditions.go:102] verifying NodePressure condition ...
	I1018 15:20:04.389405 1795748 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1018 15:20:04.389445 1795748 node_conditions.go:123] node cpu capacity is 2
	I1018 15:20:04.389462 1795748 node_conditions.go:105] duration metric: took 3.981539ms to run NodePressure ...
	I1018 15:20:04.389518 1795748 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 15:20:04.672417 1795748 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1018 15:20:04.676512 1795748 kubeadm.go:743] kubelet initialised
	I1018 15:20:04.676536 1795748 kubeadm.go:744] duration metric: took 4.090917ms waiting for restarted kubelet to initialise ...
	I1018 15:20:04.676552 1795748 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 15:20:04.693291 1795748 ops.go:34] apiserver oom_adj: -16
	I1018 15:20:04.693324 1795748 kubeadm.go:601] duration metric: took 8.199762235s to restartPrimaryControlPlane
	I1018 15:20:04.693361 1795748 kubeadm.go:402] duration metric: took 8.254492106s to StartCluster
	I1018 15:20:04.693389 1795748 settings.go:142] acquiring lock: {Name:mkc4a015ef1628793f35d59d734503738678fa0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:20:04.693469 1795748 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-1755824/kubeconfig
	I1018 15:20:04.694090 1795748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/kubeconfig: {Name:mkd0359d239071160661347e1005ef052a3265ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:20:04.694328 1795748 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 15:20:04.694455 1795748 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 15:20:04.694567 1795748 addons.go:69] Setting storage-provisioner=true in profile "test-preload-490392"
	I1018 15:20:04.694582 1795748 config.go:182] Loaded profile config "test-preload-490392": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1018 15:20:04.694595 1795748 addons.go:238] Setting addon storage-provisioner=true in "test-preload-490392"
	W1018 15:20:04.694607 1795748 addons.go:247] addon storage-provisioner should already be in state true
	I1018 15:20:04.694645 1795748 host.go:66] Checking if "test-preload-490392" exists ...
	I1018 15:20:04.694637 1795748 addons.go:69] Setting default-storageclass=true in profile "test-preload-490392"
	I1018 15:20:04.694777 1795748 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-490392"
	I1018 15:20:04.695111 1795748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 15:20:04.695151 1795748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 15:20:04.695181 1795748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 15:20:04.695227 1795748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 15:20:04.696292 1795748 out.go:179] * Verifying Kubernetes components...
	I1018 15:20:04.698075 1795748 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:20:04.709518 1795748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36743
	I1018 15:20:04.710110 1795748 main.go:141] libmachine: () Calling .GetVersion
	I1018 15:20:04.710635 1795748 main.go:141] libmachine: Using API Version  1
	I1018 15:20:04.710659 1795748 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 15:20:04.711057 1795748 main.go:141] libmachine: () Calling .GetMachineName
	I1018 15:20:04.711677 1795748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 15:20:04.711709 1795748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 15:20:04.711723 1795748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40691
	I1018 15:20:04.712244 1795748 main.go:141] libmachine: () Calling .GetVersion
	I1018 15:20:04.712791 1795748 main.go:141] libmachine: Using API Version  1
	I1018 15:20:04.712819 1795748 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 15:20:04.713213 1795748 main.go:141] libmachine: () Calling .GetMachineName
	I1018 15:20:04.713424 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetState
	I1018 15:20:04.715861 1795748 kapi.go:59] client config for test-preload-490392: &rest.Config{Host:"https://192.168.39.200:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/test-preload-490392/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/test-preload-490392/client.key", CAFile:"/home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
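The rest.Config dump above is built from the profile's client certificate, key, and CA. A hedged client-go sketch that builds an equivalent client from the kubeconfig path this log shows being updated (the path comes from the log; everything else is standard client-go, not minikube's kapi.go):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig the settings.go lines above report updating.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21409-1755824/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Quick smoke test against https://192.168.39.200:8443.
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}
```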
	I1018 15:20:04.716188 1795748 addons.go:238] Setting addon default-storageclass=true in "test-preload-490392"
	W1018 15:20:04.716207 1795748 addons.go:247] addon default-storageclass should already be in state true
	I1018 15:20:04.716240 1795748 host.go:66] Checking if "test-preload-490392" exists ...
	I1018 15:20:04.716596 1795748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 15:20:04.716645 1795748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 15:20:04.726562 1795748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43317
	I1018 15:20:04.727187 1795748 main.go:141] libmachine: () Calling .GetVersion
	I1018 15:20:04.727809 1795748 main.go:141] libmachine: Using API Version  1
	I1018 15:20:04.727842 1795748 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 15:20:04.728238 1795748 main.go:141] libmachine: () Calling .GetMachineName
	I1018 15:20:04.728508 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetState
	I1018 15:20:04.730579 1795748 main.go:141] libmachine: (test-preload-490392) Calling .DriverName
	I1018 15:20:04.732983 1795748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34533
	I1018 15:20:04.733138 1795748 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 15:20:04.733601 1795748 main.go:141] libmachine: () Calling .GetVersion
	I1018 15:20:04.734181 1795748 main.go:141] libmachine: Using API Version  1
	I1018 15:20:04.734201 1795748 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 15:20:04.734587 1795748 main.go:141] libmachine: () Calling .GetMachineName
	I1018 15:20:04.734621 1795748 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 15:20:04.734639 1795748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 15:20:04.734666 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHHostname
	I1018 15:20:04.735236 1795748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 15:20:04.735297 1795748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 15:20:04.738626 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:20:04.739161 1795748 main.go:141] libmachine: (test-preload-490392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:af:24", ip: ""} in network mk-test-preload-490392: {Iface:virbr1 ExpiryTime:2025-10-18 16:19:45 +0000 UTC Type:0 Mac:52:54:00:0b:af:24 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:test-preload-490392 Clientid:01:52:54:00:0b:af:24}
	I1018 15:20:04.739191 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined IP address 192.168.39.200 and MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:20:04.739421 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHPort
	I1018 15:20:04.739670 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHKeyPath
	I1018 15:20:04.739915 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHUsername
	I1018 15:20:04.740169 1795748 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/test-preload-490392/id_rsa Username:docker}
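The sshutil.go line above opens a key-based SSH session to the node, which is then used to scp addon manifests and run commands. A self-contained sketch with golang.org/x/crypto/ssh, reusing the address, user, and key path from the log (illustrative only; minikube's real sshutil differs):

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/test-preload-490392/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; never do this in production
	}
	client, err := ssh.Dial("tcp", "192.168.39.200:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("sudo systemctl is-active kubelet")
	fmt.Printf("%s err=%v\n", out, err)
}
```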
	I1018 15:20:04.750978 1795748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46129
	I1018 15:20:04.751516 1795748 main.go:141] libmachine: () Calling .GetVersion
	I1018 15:20:04.752054 1795748 main.go:141] libmachine: Using API Version  1
	I1018 15:20:04.752087 1795748 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 15:20:04.752556 1795748 main.go:141] libmachine: () Calling .GetMachineName
	I1018 15:20:04.752800 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetState
	I1018 15:20:04.755110 1795748 main.go:141] libmachine: (test-preload-490392) Calling .DriverName
	I1018 15:20:04.755414 1795748 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 15:20:04.755434 1795748 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 15:20:04.755458 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHHostname
	I1018 15:20:04.759492 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:20:04.760076 1795748 main.go:141] libmachine: (test-preload-490392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:af:24", ip: ""} in network mk-test-preload-490392: {Iface:virbr1 ExpiryTime:2025-10-18 16:19:45 +0000 UTC Type:0 Mac:52:54:00:0b:af:24 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:test-preload-490392 Clientid:01:52:54:00:0b:af:24}
	I1018 15:20:04.760109 1795748 main.go:141] libmachine: (test-preload-490392) DBG | domain test-preload-490392 has defined IP address 192.168.39.200 and MAC address 52:54:00:0b:af:24 in network mk-test-preload-490392
	I1018 15:20:04.760383 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHPort
	I1018 15:20:04.760613 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHKeyPath
	I1018 15:20:04.760777 1795748 main.go:141] libmachine: (test-preload-490392) Calling .GetSSHUsername
	I1018 15:20:04.760930 1795748 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/test-preload-490392/id_rsa Username:docker}
	I1018 15:20:04.909092 1795748 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 15:20:04.929894 1795748 node_ready.go:35] waiting up to 6m0s for node "test-preload-490392" to be "Ready" ...
	I1018 15:20:04.933069 1795748 node_ready.go:49] node "test-preload-490392" is "Ready"
	I1018 15:20:04.933117 1795748 node_ready.go:38] duration metric: took 3.16112ms for node "test-preload-490392" to be "Ready" ...
	I1018 15:20:04.933135 1795748 api_server.go:52] waiting for apiserver process to appear ...
	I1018 15:20:04.933195 1795748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 15:20:04.953009 1795748 api_server.go:72] duration metric: took 258.634169ms to wait for apiserver process to appear ...
	I1018 15:20:04.953045 1795748 api_server.go:88] waiting for apiserver healthz status ...
	I1018 15:20:04.953072 1795748 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1018 15:20:04.958402 1795748 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I1018 15:20:04.960298 1795748 api_server.go:141] control plane version: v1.32.0
	I1018 15:20:04.960327 1795748 api_server.go:131] duration metric: took 7.272889ms to wait for apiserver health ...
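The healthz wait above is a plain HTTPS GET that must come back 200 with the literal body "ok". A hedged sketch of that probe using the certificate paths from the rest.Config dump earlier (the paths and endpoint are copied from this log and are otherwise assumptions):

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Authenticate with the profile's client certificate and trust minikube's CA.
	cert, err := tls.LoadX509KeyPair(
		"/home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/test-preload-490392/client.crt",
		"/home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/test-preload-490392/client.key")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
	}}
	resp, err := client.Get("https://192.168.39.200:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}
```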
	I1018 15:20:04.960349 1795748 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 15:20:04.964514 1795748 system_pods.go:59] 7 kube-system pods found
	I1018 15:20:04.964548 1795748 system_pods.go:61] "coredns-668d6bf9bc-jrt2c" [52ad6fcc-15b5-4ae9-8b68-1bbb14e09b93] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:20:04.964558 1795748 system_pods.go:61] "etcd-test-preload-490392" [b40dcbcf-867b-449e-af25-31827a308dc6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 15:20:04.964570 1795748 system_pods.go:61] "kube-apiserver-test-preload-490392" [f40c6cb4-3915-4418-88b5-427ac467d207] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 15:20:04.964578 1795748 system_pods.go:61] "kube-controller-manager-test-preload-490392" [ca7782b9-7b33-4f7f-b9c5-f5dba2b0e04d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 15:20:04.964590 1795748 system_pods.go:61] "kube-proxy-t8sg5" [291161a4-db2b-4319-b46a-f7161138422d] Running
	I1018 15:20:04.964600 1795748 system_pods.go:61] "kube-scheduler-test-preload-490392" [8aa3d73f-d7b4-4f57-8ede-f6a8e0da25a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 15:20:04.964606 1795748 system_pods.go:61] "storage-provisioner" [21a244e9-183d-4167-9115-a5c775e1b585] Running
	I1018 15:20:04.964617 1795748 system_pods.go:74] duration metric: took 4.258507ms to wait for pod list to return data ...
	I1018 15:20:04.964632 1795748 default_sa.go:34] waiting for default service account to be created ...
	I1018 15:20:04.967431 1795748 default_sa.go:45] found service account: "default"
	I1018 15:20:04.967453 1795748 default_sa.go:55] duration metric: took 2.813503ms for default service account to be created ...
	I1018 15:20:04.967462 1795748 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 15:20:04.970269 1795748 system_pods.go:86] 7 kube-system pods found
	I1018 15:20:04.970304 1795748 system_pods.go:89] "coredns-668d6bf9bc-jrt2c" [52ad6fcc-15b5-4ae9-8b68-1bbb14e09b93] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:20:04.970316 1795748 system_pods.go:89] "etcd-test-preload-490392" [b40dcbcf-867b-449e-af25-31827a308dc6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 15:20:04.970327 1795748 system_pods.go:89] "kube-apiserver-test-preload-490392" [f40c6cb4-3915-4418-88b5-427ac467d207] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 15:20:04.970336 1795748 system_pods.go:89] "kube-controller-manager-test-preload-490392" [ca7782b9-7b33-4f7f-b9c5-f5dba2b0e04d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 15:20:04.970369 1795748 system_pods.go:89] "kube-proxy-t8sg5" [291161a4-db2b-4319-b46a-f7161138422d] Running
	I1018 15:20:04.970383 1795748 system_pods.go:89] "kube-scheduler-test-preload-490392" [8aa3d73f-d7b4-4f57-8ede-f6a8e0da25a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 15:20:04.970394 1795748 system_pods.go:89] "storage-provisioner" [21a244e9-183d-4167-9115-a5c775e1b585] Running
	I1018 15:20:04.970404 1795748 system_pods.go:126] duration metric: took 2.935234ms to wait for k8s-apps to be running ...
	I1018 15:20:04.970416 1795748 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 15:20:04.970468 1795748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:20:04.990093 1795748 system_svc.go:56] duration metric: took 19.661766ms WaitForService to wait for kubelet
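The kubelet service check above boils down to an exit-status test: `systemctl is-active --quiet` exits 0 only when the unit is active. A minimal sketch (run directly on the node; the helper name is invented for illustration):

```go
package main

import (
	"fmt"
	"os/exec"
)

// serviceActive reports whether a systemd unit is active; --quiet suppresses
// output, so the exit status alone carries the answer.
func serviceActive(unit string) bool {
	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	fmt.Println("kubelet active:", serviceActive("kubelet"))
}
```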
	I1018 15:20:04.990132 1795748 kubeadm.go:586] duration metric: took 295.768138ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 15:20:04.990152 1795748 node_conditions.go:102] verifying NodePressure condition ...
	I1018 15:20:04.993464 1795748 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1018 15:20:04.993489 1795748 node_conditions.go:123] node cpu capacity is 2
	I1018 15:20:04.993501 1795748 node_conditions.go:105] duration metric: took 3.343612ms to run NodePressure ...
	I1018 15:20:04.993513 1795748 start.go:241] waiting for startup goroutines ...
	I1018 15:20:05.065301 1795748 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 15:20:05.117867 1795748 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 15:20:05.834735 1795748 main.go:141] libmachine: Making call to close driver server
	I1018 15:20:05.834769 1795748 main.go:141] libmachine: (test-preload-490392) Calling .Close
	I1018 15:20:05.834807 1795748 main.go:141] libmachine: Making call to close driver server
	I1018 15:20:05.834832 1795748 main.go:141] libmachine: (test-preload-490392) Calling .Close
	I1018 15:20:05.835103 1795748 main.go:141] libmachine: (test-preload-490392) DBG | Closing plugin on server side
	I1018 15:20:05.835114 1795748 main.go:141] libmachine: Successfully made call to close driver server
	I1018 15:20:05.835124 1795748 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 15:20:05.835134 1795748 main.go:141] libmachine: Making call to close driver server
	I1018 15:20:05.835140 1795748 main.go:141] libmachine: Successfully made call to close driver server
	I1018 15:20:05.835143 1795748 main.go:141] libmachine: (test-preload-490392) Calling .Close
	I1018 15:20:05.835150 1795748 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 15:20:05.835158 1795748 main.go:141] libmachine: Making call to close driver server
	I1018 15:20:05.835169 1795748 main.go:141] libmachine: (test-preload-490392) Calling .Close
	I1018 15:20:05.835423 1795748 main.go:141] libmachine: Successfully made call to close driver server
	I1018 15:20:05.835441 1795748 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 15:20:05.835469 1795748 main.go:141] libmachine: (test-preload-490392) DBG | Closing plugin on server side
	I1018 15:20:05.835476 1795748 main.go:141] libmachine: (test-preload-490392) DBG | Closing plugin on server side
	I1018 15:20:05.835496 1795748 main.go:141] libmachine: Successfully made call to close driver server
	I1018 15:20:05.835507 1795748 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 15:20:05.844221 1795748 main.go:141] libmachine: Making call to close driver server
	I1018 15:20:05.844242 1795748 main.go:141] libmachine: (test-preload-490392) Calling .Close
	I1018 15:20:05.844573 1795748 main.go:141] libmachine: Successfully made call to close driver server
	I1018 15:20:05.844594 1795748 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 15:20:05.846562 1795748 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 15:20:05.847950 1795748 addons.go:514] duration metric: took 1.153501921s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 15:20:05.848003 1795748 start.go:246] waiting for cluster config update ...
	I1018 15:20:05.848018 1795748 start.go:255] writing updated cluster config ...
	I1018 15:20:05.848376 1795748 ssh_runner.go:195] Run: rm -f paused
	I1018 15:20:05.854385 1795748 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 15:20:05.854911 1795748 kapi.go:59] client config for test-preload-490392: &rest.Config{Host:"https://192.168.39.200:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/test-preload-490392/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/test-preload-490392/client.key", CAFile:"/home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 15:20:05.859021 1795748 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-jrt2c" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 15:20:07.865905 1795748 pod_ready.go:104] pod "coredns-668d6bf9bc-jrt2c" is not "Ready", error: <nil>
	W1018 15:20:10.366018 1795748 pod_ready.go:104] pod "coredns-668d6bf9bc-jrt2c" is not "Ready", error: <nil>
	I1018 15:20:12.366752 1795748 pod_ready.go:94] pod "coredns-668d6bf9bc-jrt2c" is "Ready"
	I1018 15:20:12.366788 1795748 pod_ready.go:86] duration metric: took 6.507744s for pod "coredns-668d6bf9bc-jrt2c" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:20:12.370060 1795748 pod_ready.go:83] waiting for pod "etcd-test-preload-490392" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:20:12.375456 1795748 pod_ready.go:94] pod "etcd-test-preload-490392" is "Ready"
	I1018 15:20:12.375495 1795748 pod_ready.go:86] duration metric: took 5.40186ms for pod "etcd-test-preload-490392" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:20:12.378125 1795748 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-490392" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 15:20:14.384500 1795748 pod_ready.go:104] pod "kube-apiserver-test-preload-490392" is not "Ready", error: <nil>
	I1018 15:20:15.884071 1795748 pod_ready.go:94] pod "kube-apiserver-test-preload-490392" is "Ready"
	I1018 15:20:15.884108 1795748 pod_ready.go:86] duration metric: took 3.505947276s for pod "kube-apiserver-test-preload-490392" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:20:15.886435 1795748 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-490392" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:20:15.893531 1795748 pod_ready.go:94] pod "kube-controller-manager-test-preload-490392" is "Ready"
	I1018 15:20:15.893557 1795748 pod_ready.go:86] duration metric: took 7.10004ms for pod "kube-controller-manager-test-preload-490392" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:20:15.896306 1795748 pod_ready.go:83] waiting for pod "kube-proxy-t8sg5" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:20:15.963081 1795748 pod_ready.go:94] pod "kube-proxy-t8sg5" is "Ready"
	I1018 15:20:15.963123 1795748 pod_ready.go:86] duration metric: took 66.789138ms for pod "kube-proxy-t8sg5" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:20:16.162947 1795748 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-490392" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 15:20:18.169436 1795748 pod_ready.go:104] pod "kube-scheduler-test-preload-490392" is not "Ready", error: <nil>
	I1018 15:20:19.670728 1795748 pod_ready.go:94] pod "kube-scheduler-test-preload-490392" is "Ready"
	I1018 15:20:19.670759 1795748 pod_ready.go:86] duration metric: took 3.507783833s for pod "kube-scheduler-test-preload-490392" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:20:19.670778 1795748 pod_ready.go:40] duration metric: took 13.816350878s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
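The pod_ready.go wait above polls each control-plane label until the pod reports the PodReady condition (or is gone). A hedged client-go sketch of the core of such a wait; the helper name and polling interval are invented for illustration, and tolerating transient list errors is a deliberate simplification:

```go
package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodsReady polls until every pod matching selector in ns has the
// PodReady condition set to True (hypothetical helper, not minikube's code).
func waitPodsReady(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // tolerate transient API errors and keep polling
			}
			if len(pods.Items) == 0 {
				return false, nil
			}
			for _, p := range pods.Items {
				ready := false
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						ready = true
						break
					}
				}
				if !ready {
					return false, nil
				}
			}
			return true, nil
		})
}
```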
	I1018 15:20:19.715384 1795748 start.go:624] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1018 15:20:19.717491 1795748 out.go:203] 
	W1018 15:20:19.718920 1795748 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1018 15:20:19.720203 1795748 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1018 15:20:19.721378 1795748 out.go:179] * Done! kubectl is now configured to use "test-preload-490392" cluster and "default" namespace by default
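The warning a few lines up comes from a kubectl/cluster minor-version skew check: kubectl 1.34.1 against a 1.32.0 cluster is a minor skew of 2, outside kubectl's supported window of one minor version either way. A small sketch of that arithmetic (illustrative parsing only, not minikube's start.go):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component from a "major.minor.patch" version.
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0
	}
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectl, cluster := "1.34.1", "1.32.0"
	skew := minor(kubectl) - minor(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("minor skew: %d\n", skew)
	if skew > 1 { // kubectl supports +/-1 minor of the server
		fmt.Println("! kubectl may have incompatibilities with this cluster")
	}
}
```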
	
	
	==> CRI-O <==
	Oct 18 15:20:20 test-preload-490392 crio[833]: time="2025-10-18 15:20:20.648860993Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760800820648837446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b7d41b26-27b7-4d7b-b2c3-db819bda52f5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 15:20:20 test-preload-490392 crio[833]: time="2025-10-18 15:20:20.651018569Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b0cc4969-71c4-4178-8d73-2366308a09af name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 15:20:20 test-preload-490392 crio[833]: time="2025-10-18 15:20:20.651437817Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b0cc4969-71c4-4178-8d73-2366308a09af name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 15:20:20 test-preload-490392 crio[833]: time="2025-10-18 15:20:20.651746087Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d79eaa9c2dd80d5d4451c102734f0a0fb2954a80d9895def2f5534bd9d8df452,PodSandboxId:47ab06d933cd49959710c231075d14df66ed86b04c8bf7288e417b9b352a7818,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760800807344057401,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-jrt2c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52ad6fcc-15b5-4ae9-8b68-1bbb14e09b93,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7877d88940dc752bbf996c8c2b2b7b8317048b4412c326b3a08e5a61b7ae2c60,PodSandboxId:6e6c8789017e08c15caa366534e8a31c123afb6ae19da11318cb4edb576a51e3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760800803673998688,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21a244e9-183d-4167-9115-a5c775e1b585,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a65967e1971f7e4b76b0600a1fef6e3bf362646a19f83702f9a07c7cfe154b37,PodSandboxId:2924589b43c4b1fdf7bd274dd033ca11f1c538c29ec2b6ccb731a4b45c274bda,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760800803663392776,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t8sg5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 291161a4-db2b-4319-b46a-f7161138422d,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03aa0e565b08769ce618285d407695fdc7ec71bbe94191abcf258e965fe8ac90,PodSandboxId:d0d5a3ccacfcec3a4a9478653cb3dca37fac1c1c658d09f04e04b6b12ea32322,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760800800271689642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-490392,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1fb3cb2c2455426fd11af72b71ecdb8,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2199e5fae75060fab7157ce51f936b6dfdbeff68c8ebebe77c6112fb26a3c304,PodSandboxId:257294646ee9e0ae039823a4656ba2332fc42be5afc5718b57fefc4476f2a21b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760800800317406387,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-490392,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f955c315d22cb35c43eef94d12f509,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:232f504cdaacb717f45c979354a8906c092afd8b6d5cd83186db5e9c7969f9ae,PodSandboxId:2202eaa90a391724d7349dfd60be484029546ec9a73d8ffa4edb52736f3ed771,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760800800274192606,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-490392,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7284663cf688fa77c9c4747f66f5fe03,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e0df30ea0826ab9aa577a8679f818c7829f9ae1708fee3d576170ca96fdeed3,PodSandboxId:a9cb4f7ffa688293840a48f1e5c2cdae96e2559baa4bca8adf5fac4c03533e16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760800800253818268,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-490392,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9e9166103d74230d28a3a8b1f7fbd94,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b0cc4969-71c4-4178-8d73-2366308a09af name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 15:20:20 test-preload-490392 crio[833]: time="2025-10-18 15:20:20.693048169Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=48206100-51b1-4065-9632-02f5167d7710 name=/runtime.v1.RuntimeService/Version
	Oct 18 15:20:20 test-preload-490392 crio[833]: time="2025-10-18 15:20:20.693138228Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=48206100-51b1-4065-9632-02f5167d7710 name=/runtime.v1.RuntimeService/Version
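The Version request/response pairs in this journal are CRI gRPC calls over CRI-O's unix socket (the kubelet and crictl issue the same RPC). A hedged sketch issuing that call with k8s.io/cri-api; the socket path is CRI-O's documented default and an assumption here:

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Local unix socket: no TLS, hence the insecure transport credentials.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.Version(context.Background(), &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	// Matches the log: RuntimeName:cri-o RuntimeVersion:1.29.1 RuntimeApiVersion:v1
	fmt.Printf("%s %s %s\n", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}
```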
	Oct 18 15:20:20 test-preload-490392 crio[833]: time="2025-10-18 15:20:20.695651628Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ca84bfac-39a0-4e54-920e-a65baf46992c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 15:20:20 test-preload-490392 crio[833]: time="2025-10-18 15:20:20.696927624Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760800820696900082,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca84bfac-39a0-4e54-920e-a65baf46992c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 15:20:20 test-preload-490392 crio[833]: time="2025-10-18 15:20:20.697965784Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81607511-8611-4092-9d4d-8d8c23f29b70 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 15:20:20 test-preload-490392 crio[833]: time="2025-10-18 15:20:20.698248583Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81607511-8611-4092-9d4d-8d8c23f29b70 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 15:20:20 test-preload-490392 crio[833]: time="2025-10-18 15:20:20.698779898Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d79eaa9c2dd80d5d4451c102734f0a0fb2954a80d9895def2f5534bd9d8df452,PodSandboxId:47ab06d933cd49959710c231075d14df66ed86b04c8bf7288e417b9b352a7818,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760800807344057401,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-jrt2c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52ad6fcc-15b5-4ae9-8b68-1bbb14e09b93,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7877d88940dc752bbf996c8c2b2b7b8317048b4412c326b3a08e5a61b7ae2c60,PodSandboxId:6e6c8789017e08c15caa366534e8a31c123afb6ae19da11318cb4edb576a51e3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760800803673998688,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21a244e9-183d-4167-9115-a5c775e1b585,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a65967e1971f7e4b76b0600a1fef6e3bf362646a19f83702f9a07c7cfe154b37,PodSandboxId:2924589b43c4b1fdf7bd274dd033ca11f1c538c29ec2b6ccb731a4b45c274bda,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760800803663392776,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t8sg5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 291161a4-db2b-4319-b46a-f7161138422d,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03aa0e565b08769ce618285d407695fdc7ec71bbe94191abcf258e965fe8ac90,PodSandboxId:d0d5a3ccacfcec3a4a9478653cb3dca37fac1c1c658d09f04e04b6b12ea32322,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760800800271689642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-490392,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1fb3cb2c2455426fd11af72b71ecdb8,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2199e5fae75060fab7157ce51f936b6dfdbeff68c8ebebe77c6112fb26a3c304,PodSandboxId:257294646ee9e0ae039823a4656ba2332fc42be5afc5718b57fefc4476f2a21b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760800800317406387,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-490392,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f955c315d22cb35c43eef94d12f509,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:232f504cdaacb717f45c979354a8906c092afd8b6d5cd83186db5e9c7969f9ae,PodSandboxId:2202eaa90a391724d7349dfd60be484029546ec9a73d8ffa4edb52736f3ed771,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760800800274192606,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-490392,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7284663cf688fa77c9c4747f66f5fe03,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e0df30ea0826ab9aa577a8679f818c7829f9ae1708fee3d576170ca96fdeed3,PodSandboxId:a9cb4f7ffa688293840a48f1e5c2cdae96e2559baa4bca8adf5fac4c03533e16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760800800253818268,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-490392,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9e9166103d74230d28a3a8b1f7fbd94,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81607511-8611-4092-9d4d-8d8c23f29b70 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 15:20:20 test-preload-490392 crio[833]: time="2025-10-18 15:20:20.739881487Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b889531-2607-4854-af57-db6723b208ce name=/runtime.v1.RuntimeService/Version
	Oct 18 15:20:20 test-preload-490392 crio[833]: time="2025-10-18 15:20:20.740043862Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b889531-2607-4854-af57-db6723b208ce name=/runtime.v1.RuntimeService/Version
	Oct 18 15:20:20 test-preload-490392 crio[833]: time="2025-10-18 15:20:20.742048503Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8d043145-f678-4e93-a8a5-8f543767acd3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 15:20:20 test-preload-490392 crio[833]: time="2025-10-18 15:20:20.742605741Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760800820742584608,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8d043145-f678-4e93-a8a5-8f543767acd3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 15:20:20 test-preload-490392 crio[833]: time="2025-10-18 15:20:20.743071485Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2775fb0b-71a8-4d3b-b7ad-aebb67eb6dcc name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 15:20:20 test-preload-490392 crio[833]: time="2025-10-18 15:20:20.743167562Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2775fb0b-71a8-4d3b-b7ad-aebb67eb6dcc name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 15:20:20 test-preload-490392 crio[833]: time="2025-10-18 15:20:20.743416981Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d79eaa9c2dd80d5d4451c102734f0a0fb2954a80d9895def2f5534bd9d8df452,PodSandboxId:47ab06d933cd49959710c231075d14df66ed86b04c8bf7288e417b9b352a7818,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760800807344057401,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-jrt2c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52ad6fcc-15b5-4ae9-8b68-1bbb14e09b93,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7877d88940dc752bbf996c8c2b2b7b8317048b4412c326b3a08e5a61b7ae2c60,PodSandboxId:6e6c8789017e08c15caa366534e8a31c123afb6ae19da11318cb4edb576a51e3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760800803673998688,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21a244e9-183d-4167-9115-a5c775e1b585,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a65967e1971f7e4b76b0600a1fef6e3bf362646a19f83702f9a07c7cfe154b37,PodSandboxId:2924589b43c4b1fdf7bd274dd033ca11f1c538c29ec2b6ccb731a4b45c274bda,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760800803663392776,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t8sg5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 291161a4-db2b-4319-b46a-f7161138422d,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03aa0e565b08769ce618285d407695fdc7ec71bbe94191abcf258e965fe8ac90,PodSandboxId:d0d5a3ccacfcec3a4a9478653cb3dca37fac1c1c658d09f04e04b6b12ea32322,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760800800271689642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-490392,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1fb3cb2c2455426fd11af72b71ecdb8,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2199e5fae75060fab7157ce51f936b6dfdbeff68c8ebebe77c6112fb26a3c304,PodSandboxId:257294646ee9e0ae039823a4656ba2332fc42be5afc5718b57fefc4476f2a21b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760800800317406387,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-490392,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f955c315d22cb35c43eef94d12f509,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:232f504cdaacb717f45c979354a8906c092afd8b6d5cd83186db5e9c7969f9ae,PodSandboxId:2202eaa90a391724d7349dfd60be484029546ec9a73d8ffa4edb52736f3ed771,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760800800274192606,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-490392,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7284663cf688fa77c9c4747f66f5fe03,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e0df30ea0826ab9aa577a8679f818c7829f9ae1708fee3d576170ca96fdeed3,PodSandboxId:a9cb4f7ffa688293840a48f1e5c2cdae96e2559baa4bca8adf5fac4c03533e16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760800800253818268,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-490392,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9e9166103d74230d28a3a8b1f7fbd94,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2775fb0b-71a8-4d3b-b7ad-aebb67eb6dcc name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 15:20:20 test-preload-490392 crio[833]: time="2025-10-18 15:20:20.780738361Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c7014a83-8d4e-4c23-b66c-86370d31cce6 name=/runtime.v1.RuntimeService/Version
	Oct 18 15:20:20 test-preload-490392 crio[833]: time="2025-10-18 15:20:20.780826582Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c7014a83-8d4e-4c23-b66c-86370d31cce6 name=/runtime.v1.RuntimeService/Version
	Oct 18 15:20:20 test-preload-490392 crio[833]: time="2025-10-18 15:20:20.782189996Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e327c76a-0fea-446b-ab6f-8e15e248c0ef name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 15:20:20 test-preload-490392 crio[833]: time="2025-10-18 15:20:20.782675224Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760800820782652118,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e327c76a-0fea-446b-ab6f-8e15e248c0ef name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 15:20:20 test-preload-490392 crio[833]: time="2025-10-18 15:20:20.783523733Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e9a403b-e099-407b-8674-287d0a16cb4c name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 15:20:20 test-preload-490392 crio[833]: time="2025-10-18 15:20:20.783601376Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e9a403b-e099-407b-8674-287d0a16cb4c name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 15:20:20 test-preload-490392 crio[833]: time="2025-10-18 15:20:20.783760888Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d79eaa9c2dd80d5d4451c102734f0a0fb2954a80d9895def2f5534bd9d8df452,PodSandboxId:47ab06d933cd49959710c231075d14df66ed86b04c8bf7288e417b9b352a7818,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760800807344057401,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-jrt2c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52ad6fcc-15b5-4ae9-8b68-1bbb14e09b93,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7877d88940dc752bbf996c8c2b2b7b8317048b4412c326b3a08e5a61b7ae2c60,PodSandboxId:6e6c8789017e08c15caa366534e8a31c123afb6ae19da11318cb4edb576a51e3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760800803673998688,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21a244e9-183d-4167-9115-a5c775e1b585,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a65967e1971f7e4b76b0600a1fef6e3bf362646a19f83702f9a07c7cfe154b37,PodSandboxId:2924589b43c4b1fdf7bd274dd033ca11f1c538c29ec2b6ccb731a4b45c274bda,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760800803663392776,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t8sg5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 291161a4-db2b-4319-b46a-f7161138422d,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03aa0e565b08769ce618285d407695fdc7ec71bbe94191abcf258e965fe8ac90,PodSandboxId:d0d5a3ccacfcec3a4a9478653cb3dca37fac1c1c658d09f04e04b6b12ea32322,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760800800271689642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-490392,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1fb3cb2c2455426fd11af72b71ecdb8,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2199e5fae75060fab7157ce51f936b6dfdbeff68c8ebebe77c6112fb26a3c304,PodSandboxId:257294646ee9e0ae039823a4656ba2332fc42be5afc5718b57fefc4476f2a21b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760800800317406387,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-490392,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f955c315d22cb35c43eef94d12f509,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:232f504cdaacb717f45c979354a8906c092afd8b6d5cd83186db5e9c7969f9ae,PodSandboxId:2202eaa90a391724d7349dfd60be484029546ec9a73d8ffa4edb52736f3ed771,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760800800274192606,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-490392,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7284663cf688fa77c9c4747f66f5fe03,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e0df30ea0826ab9aa577a8679f818c7829f9ae1708fee3d576170ca96fdeed3,PodSandboxId:a9cb4f7ffa688293840a48f1e5c2cdae96e2559baa4bca8adf5fac4c03533e16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760800800253818268,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-490392,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9e9166103d74230d28a3a8b1f7fbd94,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7e9a403b-e099-407b-8674-287d0a16cb4c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d79eaa9c2dd80       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   13 seconds ago      Running             coredns                   1                   47ab06d933cd4       coredns-668d6bf9bc-jrt2c
	7877d88940dc7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 seconds ago      Running             storage-provisioner       1                   6e6c8789017e0       storage-provisioner
	a65967e1971f7       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   17 seconds ago      Running             kube-proxy                1                   2924589b43c4b       kube-proxy-t8sg5
	2199e5fae7506       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   20 seconds ago      Running             kube-scheduler            1                   257294646ee9e       kube-scheduler-test-preload-490392
	232f504cdaacb       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   20 seconds ago      Running             kube-controller-manager   1                   2202eaa90a391       kube-controller-manager-test-preload-490392
	03aa0e565b087       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   20 seconds ago      Running             etcd                      1                   d0d5a3ccacfce       etcd-test-preload-490392
	1e0df30ea0826       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   20 seconds ago      Running             kube-apiserver            1                   a9cb4f7ffa688       kube-apiserver-test-preload-490392
	
	
	==> coredns [d79eaa9c2dd80d5d4451c102734f0a0fb2954a80d9895def2f5534bd9d8df452] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:60865 - 17939 "HINFO IN 5025662804454386431.2565142066949850909. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.066555321s
	
	
	==> describe nodes <==
	Name:               test-preload-490392
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-490392
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=test-preload-490392
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T15_18_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 15:18:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-490392
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 15:20:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 15:20:04 +0000   Sat, 18 Oct 2025 15:18:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 15:20:04 +0000   Sat, 18 Oct 2025 15:18:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 15:20:04 +0000   Sat, 18 Oct 2025 15:18:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 15:20:04 +0000   Sat, 18 Oct 2025 15:20:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.200
	  Hostname:    test-preload-490392
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	System Info:
	  Machine ID:                 eec9d1e8645e4634bc6141d52382d0ec
	  System UUID:                eec9d1e8-645e-4634-bc61-41d52382d0ec
	  Boot ID:                    2178a32a-8550-427b-97de-d067ca26d699
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-jrt2c                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     104s
	  kube-system                 etcd-test-preload-490392                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         109s
	  kube-system                 kube-apiserver-test-preload-490392             250m (12%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-test-preload-490392    200m (10%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-t8sg5                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-test-preload-490392             100m (5%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 101s                 kube-proxy       
	  Normal   Starting                 17s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  115s (x8 over 115s)  kubelet          Node test-preload-490392 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    115s (x8 over 115s)  kubelet          Node test-preload-490392 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     115s (x7 over 115s)  kubelet          Node test-preload-490392 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  115s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     109s                 kubelet          Node test-preload-490392 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  109s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  109s                 kubelet          Node test-preload-490392 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    109s                 kubelet          Node test-preload-490392 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 109s                 kubelet          Starting kubelet.
	  Normal   NodeReady                108s                 kubelet          Node test-preload-490392 status is now: NodeReady
	  Normal   RegisteredNode           105s                 node-controller  Node test-preload-490392 event: Registered Node test-preload-490392 in Controller
	  Normal   Starting                 23s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  23s (x8 over 23s)    kubelet          Node test-preload-490392 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    23s (x8 over 23s)    kubelet          Node test-preload-490392 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     23s (x7 over 23s)    kubelet          Node test-preload-490392 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  23s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 18s                  kubelet          Node test-preload-490392 has been rebooted, boot id: 2178a32a-8550-427b-97de-d067ca26d699
	  Normal   RegisteredNode           15s                  node-controller  Node test-preload-490392 event: Registered Node test-preload-490392 in Controller
	
	
	==> dmesg <==
	[Oct18 15:19] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000057] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.007676] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.981485] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000006] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.088956] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.098404] kauditd_printk_skb: 102 callbacks suppressed
	[Oct18 15:20] kauditd_printk_skb: 177 callbacks suppressed
	[  +4.938261] kauditd_printk_skb: 197 callbacks suppressed
	
	
	==> etcd [03aa0e565b08769ce618285d407695fdc7ec71bbe94191abcf258e965fe8ac90] <==
	{"level":"info","ts":"2025-10-18T15:20:00.807948Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1d37198946ef4128","local-member-id":"fe8c4457455e3a5","added-peer-id":"fe8c4457455e3a5","added-peer-peer-urls":["https://192.168.39.200:2380"]}
	{"level":"info","ts":"2025-10-18T15:20:00.808073Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1d37198946ef4128","local-member-id":"fe8c4457455e3a5","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T15:20:00.808907Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T15:20:00.808859Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-18T15:20:00.810847Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-18T15:20:00.818925Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.200:2380"}
	{"level":"info","ts":"2025-10-18T15:20:00.818973Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.200:2380"}
	{"level":"info","ts":"2025-10-18T15:20:00.823888Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"fe8c4457455e3a5","initial-advertise-peer-urls":["https://192.168.39.200:2380"],"listen-peer-urls":["https://192.168.39.200:2380"],"advertise-client-urls":["https://192.168.39.200:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.200:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-18T15:20:00.824092Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-18T15:20:01.738139Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-18T15:20:01.738229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-18T15:20:01.738317Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 received MsgPreVoteResp from fe8c4457455e3a5 at term 2"}
	{"level":"info","ts":"2025-10-18T15:20:01.738335Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 became candidate at term 3"}
	{"level":"info","ts":"2025-10-18T15:20:01.738341Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 received MsgVoteResp from fe8c4457455e3a5 at term 3"}
	{"level":"info","ts":"2025-10-18T15:20:01.738349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 became leader at term 3"}
	{"level":"info","ts":"2025-10-18T15:20:01.738356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fe8c4457455e3a5 elected leader fe8c4457455e3a5 at term 3"}
	{"level":"info","ts":"2025-10-18T15:20:01.741680Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"fe8c4457455e3a5","local-member-attributes":"{Name:test-preload-490392 ClientURLs:[https://192.168.39.200:2379]}","request-path":"/0/members/fe8c4457455e3a5/attributes","cluster-id":"1d37198946ef4128","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-18T15:20:01.741893Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T15:20:01.742432Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T15:20:01.742518Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-18T15:20:01.743643Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-18T15:20:01.742927Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-18T15:20:01.743957Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-18T15:20:01.744300Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-18T15:20:01.744604Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.200:2379"}
	
	
	==> kernel <==
	 15:20:21 up 0 min,  0 users,  load average: 0.77, 0.22, 0.07
	Linux test-preload-490392 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [1e0df30ea0826ab9aa577a8679f818c7829f9ae1708fee3d576170ca96fdeed3] <==
	I1018 15:20:03.124906       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 15:20:03.124912       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 15:20:03.124918       1 cache.go:39] Caches are synced for autoregister controller
	I1018 15:20:03.133739       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1018 15:20:03.135359       1 policy_source.go:240] refreshing policies
	I1018 15:20:03.146141       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 15:20:03.168641       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 15:20:03.169338       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 15:20:03.170368       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 15:20:03.170506       1 shared_informer.go:320] Caches are synced for configmaps
	I1018 15:20:03.170590       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 15:20:03.170598       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 15:20:03.172146       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	E1018 15:20:03.180231       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 15:20:03.194406       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 15:20:03.218147       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1018 15:20:03.286101       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1018 15:20:03.974472       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 15:20:04.515245       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1018 15:20:04.555236       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1018 15:20:04.589837       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 15:20:04.597630       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 15:20:06.479498       1 controller.go:615] quota admission added evaluator for: endpoints
	I1018 15:20:06.582676       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1018 15:20:06.630834       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [232f504cdaacb717f45c979354a8906c092afd8b6d5cd83186db5e9c7969f9ae] <==
	I1018 15:20:06.275881       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1018 15:20:06.276081       1 shared_informer.go:320] Caches are synced for job
	I1018 15:20:06.280214       1 shared_informer.go:320] Caches are synced for node
	I1018 15:20:06.280315       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 15:20:06.280350       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 15:20:06.280354       1 shared_informer.go:320] Caches are synced for persistent volume
	I1018 15:20:06.280393       1 shared_informer.go:320] Caches are synced for resource quota
	I1018 15:20:06.280360       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1018 15:20:06.280427       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1018 15:20:06.280535       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-490392"
	I1018 15:20:06.289512       1 shared_informer.go:320] Caches are synced for attach detach
	I1018 15:20:06.296711       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1018 15:20:06.296747       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1018 15:20:06.304140       1 shared_informer.go:320] Caches are synced for taint
	I1018 15:20:06.304477       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 15:20:06.304562       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-490392"
	I1018 15:20:06.304601       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 15:20:06.306667       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1018 15:20:06.308738       1 shared_informer.go:320] Caches are synced for garbage collector
	I1018 15:20:06.318903       1 shared_informer.go:320] Caches are synced for resource quota
	I1018 15:20:06.591791       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="343.983564ms"
	I1018 15:20:06.591897       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="59.038µs"
	I1018 15:20:08.436083       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="82.812µs"
	I1018 15:20:12.091472       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="12.890218ms"
	I1018 15:20:12.094024       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="107.149µs"
	
	
	==> kube-proxy [a65967e1971f7e4b76b0600a1fef6e3bf362646a19f83702f9a07c7cfe154b37] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1018 15:20:03.873118       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1018 15:20:03.883940       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.200"]
	E1018 15:20:03.884070       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 15:20:03.924104       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1018 15:20:03.924161       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1018 15:20:03.924207       1 server_linux.go:170] "Using iptables Proxier"
	I1018 15:20:03.927497       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 15:20:03.928025       1 server.go:497] "Version info" version="v1.32.0"
	I1018 15:20:03.928397       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:20:03.931303       1 config.go:199] "Starting service config controller"
	I1018 15:20:03.931407       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1018 15:20:03.931516       1 config.go:105] "Starting endpoint slice config controller"
	I1018 15:20:03.931537       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1018 15:20:03.933444       1 config.go:329] "Starting node config controller"
	I1018 15:20:03.933487       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1018 15:20:04.031634       1 shared_informer.go:320] Caches are synced for service config
	I1018 15:20:04.031670       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1018 15:20:04.034689       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2199e5fae75060fab7157ce51f936b6dfdbeff68c8ebebe77c6112fb26a3c304] <==
	I1018 15:20:01.369363       1 serving.go:386] Generated self-signed cert in-memory
	W1018 15:20:03.056660       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 15:20:03.056730       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 15:20:03.056740       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 15:20:03.056751       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 15:20:03.117235       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1018 15:20:03.126381       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:20:03.134084       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 15:20:03.134185       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1018 15:20:03.134302       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1018 15:20:03.134380       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 15:20:03.234407       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 18 15:20:03 test-preload-490392 kubelet[1162]: E1018 15:20:03.269197    1162 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-490392\" already exists" pod="kube-system/kube-scheduler-test-preload-490392"
	Oct 18 15:20:03 test-preload-490392 kubelet[1162]: I1018 15:20:03.269246    1162 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-490392"
	Oct 18 15:20:03 test-preload-490392 kubelet[1162]: I1018 15:20:03.276669    1162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/21a244e9-183d-4167-9115-a5c775e1b585-tmp\") pod \"storage-provisioner\" (UID: \"21a244e9-183d-4167-9115-a5c775e1b585\") " pod="kube-system/storage-provisioner"
	Oct 18 15:20:03 test-preload-490392 kubelet[1162]: I1018 15:20:03.276740    1162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/291161a4-db2b-4319-b46a-f7161138422d-lib-modules\") pod \"kube-proxy-t8sg5\" (UID: \"291161a4-db2b-4319-b46a-f7161138422d\") " pod="kube-system/kube-proxy-t8sg5"
	Oct 18 15:20:03 test-preload-490392 kubelet[1162]: I1018 15:20:03.276759    1162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/291161a4-db2b-4319-b46a-f7161138422d-xtables-lock\") pod \"kube-proxy-t8sg5\" (UID: \"291161a4-db2b-4319-b46a-f7161138422d\") " pod="kube-system/kube-proxy-t8sg5"
	Oct 18 15:20:03 test-preload-490392 kubelet[1162]: E1018 15:20:03.277227    1162 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 18 15:20:03 test-preload-490392 kubelet[1162]: E1018 15:20:03.277346    1162 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52ad6fcc-15b5-4ae9-8b68-1bbb14e09b93-config-volume podName:52ad6fcc-15b5-4ae9-8b68-1bbb14e09b93 nodeName:}" failed. No retries permitted until 2025-10-18 15:20:03.777325341 +0000 UTC m=+5.676674817 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/52ad6fcc-15b5-4ae9-8b68-1bbb14e09b93-config-volume") pod "coredns-668d6bf9bc-jrt2c" (UID: "52ad6fcc-15b5-4ae9-8b68-1bbb14e09b93") : object "kube-system"/"coredns" not registered
	Oct 18 15:20:03 test-preload-490392 kubelet[1162]: E1018 15:20:03.288054    1162 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-490392\" already exists" pod="kube-system/etcd-test-preload-490392"
	Oct 18 15:20:03 test-preload-490392 kubelet[1162]: I1018 15:20:03.370838    1162 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-490392"
	Oct 18 15:20:03 test-preload-490392 kubelet[1162]: I1018 15:20:03.370927    1162 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-490392"
	Oct 18 15:20:03 test-preload-490392 kubelet[1162]: I1018 15:20:03.371192    1162 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-490392"
	Oct 18 15:20:03 test-preload-490392 kubelet[1162]: E1018 15:20:03.392102    1162 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-490392\" already exists" pod="kube-system/etcd-test-preload-490392"
	Oct 18 15:20:03 test-preload-490392 kubelet[1162]: E1018 15:20:03.394865    1162 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-490392\" already exists" pod="kube-system/kube-apiserver-test-preload-490392"
	Oct 18 15:20:03 test-preload-490392 kubelet[1162]: E1018 15:20:03.396630    1162 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-490392\" already exists" pod="kube-system/kube-scheduler-test-preload-490392"
	Oct 18 15:20:03 test-preload-490392 kubelet[1162]: E1018 15:20:03.781118    1162 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 18 15:20:03 test-preload-490392 kubelet[1162]: E1018 15:20:03.781222    1162 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52ad6fcc-15b5-4ae9-8b68-1bbb14e09b93-config-volume podName:52ad6fcc-15b5-4ae9-8b68-1bbb14e09b93 nodeName:}" failed. No retries permitted until 2025-10-18 15:20:04.781198775 +0000 UTC m=+6.680548239 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/52ad6fcc-15b5-4ae9-8b68-1bbb14e09b93-config-volume") pod "coredns-668d6bf9bc-jrt2c" (UID: "52ad6fcc-15b5-4ae9-8b68-1bbb14e09b93") : object "kube-system"/"coredns" not registered
	Oct 18 15:20:04 test-preload-490392 kubelet[1162]: E1018 15:20:04.789319    1162 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 18 15:20:04 test-preload-490392 kubelet[1162]: E1018 15:20:04.789386    1162 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52ad6fcc-15b5-4ae9-8b68-1bbb14e09b93-config-volume podName:52ad6fcc-15b5-4ae9-8b68-1bbb14e09b93 nodeName:}" failed. No retries permitted until 2025-10-18 15:20:06.789372485 +0000 UTC m=+8.688721960 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/52ad6fcc-15b5-4ae9-8b68-1bbb14e09b93-config-volume") pod "coredns-668d6bf9bc-jrt2c" (UID: "52ad6fcc-15b5-4ae9-8b68-1bbb14e09b93") : object "kube-system"/"coredns" not registered
	Oct 18 15:20:04 test-preload-490392 kubelet[1162]: I1018 15:20:04.851213    1162 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Oct 18 15:20:08 test-preload-490392 kubelet[1162]: E1018 15:20:08.312636    1162 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760800808312114099,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 18 15:20:08 test-preload-490392 kubelet[1162]: E1018 15:20:08.312669    1162 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760800808312114099,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 18 15:20:09 test-preload-490392 kubelet[1162]: I1018 15:20:09.422875    1162 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 15:20:12 test-preload-490392 kubelet[1162]: I1018 15:20:12.056794    1162 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 15:20:18 test-preload-490392 kubelet[1162]: E1018 15:20:18.313990    1162 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760800818313508617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 18 15:20:18 test-preload-490392 kubelet[1162]: E1018 15:20:18.314012    1162 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760800818313508617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [7877d88940dc752bbf996c8c2b2b7b8317048b4412c326b3a08e5a61b7ae2c60] <==
	I1018 15:20:03.797221       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-490392 -n test-preload-490392
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-490392 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-490392" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-490392
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-490392: (1.019155003s)
--- FAIL: TestPreload (164.10s)
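(Note on the post-mortem above: helpers_test.go:269 ends the dump with a kubectl field-selector query for pods that are not in phase Running. The following is a minimal standalone sketch of that kind of check — not the actual helpers_test.go implementation — assuming kubectl is on PATH and reusing the context name from this log:)

	// nonrunning.go: hedged sketch of the post-mortem pod check.
	// It shells out to kubectl exactly as the log above does and
	// reports any pod names the field selector returns.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// nonRunningPods lists pods in any namespace whose phase is not Running.
	func nonRunningPods(context string) ([]string, error) {
		out, err := exec.Command(
			"kubectl", "--context", context,
			"get", "po",
			"-o=jsonpath={.items[*].metadata.name}",
			"-A", "--field-selector=status.phase!=Running",
		).CombinedOutput()
		if err != nil {
			return nil, fmt.Errorf("kubectl failed: %v: %s", err, out)
		}
		// jsonpath output is space-separated pod names; empty means all Running.
		return strings.Fields(string(out)), nil
	}

	func main() {
		pods, err := nonRunningPods("test-preload-490392")
		if err != nil {
			fmt.Println("query failed:", err)
			return
		}
		if len(pods) > 0 {
			fmt.Println("non-Running pods:", pods)
		}
	}

(An empty result from this query, as in the run above, means the failure was in the preload/second-start verification itself rather than in pod health.)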

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (42.43s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-153767 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-153767 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (37.151735569s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-153767] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-1755824/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1755824/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-153767" primary control-plane node in "pause-153767" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-153767" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 15:27:04.256281 1803906 out.go:360] Setting OutFile to fd 1 ...
	I1018 15:27:04.256624 1803906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:27:04.256638 1803906 out.go:374] Setting ErrFile to fd 2...
	I1018 15:27:04.256643 1803906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:27:04.256964 1803906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1755824/.minikube/bin
	I1018 15:27:04.257586 1803906 out.go:368] Setting JSON to false
	I1018 15:27:04.258956 1803906 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":25772,"bootTime":1760775452,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 15:27:04.259097 1803906 start.go:141] virtualization: kvm guest
	I1018 15:27:04.261028 1803906 out.go:179] * [pause-153767] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 15:27:04.262797 1803906 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 15:27:04.262800 1803906 notify.go:220] Checking for updates...
	I1018 15:27:04.265303 1803906 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 15:27:04.266546 1803906 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1755824/kubeconfig
	I1018 15:27:04.267918 1803906 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1755824/.minikube
	I1018 15:27:04.269167 1803906 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 15:27:04.270490 1803906 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 15:27:04.272531 1803906 config.go:182] Loaded profile config "pause-153767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:27:04.273162 1803906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 15:27:04.273261 1803906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 15:27:04.292223 1803906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36489
	I1018 15:27:04.292839 1803906 main.go:141] libmachine: () Calling .GetVersion
	I1018 15:27:04.293561 1803906 main.go:141] libmachine: Using API Version  1
	I1018 15:27:04.293588 1803906 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 15:27:04.294117 1803906 main.go:141] libmachine: () Calling .GetMachineName
	I1018 15:27:04.294354 1803906 main.go:141] libmachine: (pause-153767) Calling .DriverName
	I1018 15:27:04.294680 1803906 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 15:27:04.295156 1803906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 15:27:04.295214 1803906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 15:27:04.313648 1803906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36441
	I1018 15:27:04.314235 1803906 main.go:141] libmachine: () Calling .GetVersion
	I1018 15:27:04.314926 1803906 main.go:141] libmachine: Using API Version  1
	I1018 15:27:04.314978 1803906 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 15:27:04.315405 1803906 main.go:141] libmachine: () Calling .GetMachineName
	I1018 15:27:04.315584 1803906 main.go:141] libmachine: (pause-153767) Calling .DriverName
	I1018 15:27:04.352492 1803906 out.go:179] * Using the kvm2 driver based on existing profile
	I1018 15:27:04.353810 1803906 start.go:305] selected driver: kvm2
	I1018 15:27:04.353828 1803906 start.go:925] validating driver "kvm2" against &{Name:pause-153767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.1 ClusterName:pause-153767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.16 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-instal
ler:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:27:04.354004 1803906 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 15:27:04.354380 1803906 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:27:04.354470 1803906 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-1755824/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 15:27:04.374103 1803906 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 15:27:04.374150 1803906 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-1755824/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 15:27:04.394238 1803906 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 15:27:04.395412 1803906 cni.go:84] Creating CNI manager for ""
	I1018 15:27:04.395476 1803906 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 15:27:04.395560 1803906 start.go:349] cluster config:
	{Name:pause-153767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-153767 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.16 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false p
ortainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:27:04.395767 1803906 iso.go:125] acquiring lock: {Name:mk7faf1d3c636cdbb2becc20102b665984151b51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:27:04.397604 1803906 out.go:179] * Starting "pause-153767" primary control-plane node in "pause-153767" cluster
	I1018 15:27:04.399177 1803906 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:27:04.399236 1803906 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 15:27:04.399246 1803906 cache.go:58] Caching tarball of preloaded images
	I1018 15:27:04.399373 1803906 preload.go:233] Found /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 15:27:04.399384 1803906 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 15:27:04.399558 1803906 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/pause-153767/config.json ...
	I1018 15:27:04.399844 1803906 start.go:360] acquireMachinesLock for pause-153767: {Name:mkd96faf82baee5d117338197f9c6cbf4f45de94 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1018 15:27:04.399918 1803906 start.go:364] duration metric: took 47.206µs to acquireMachinesLock for "pause-153767"
	I1018 15:27:04.399950 1803906 start.go:96] Skipping create...Using existing machine configuration
	I1018 15:27:04.399961 1803906 fix.go:54] fixHost starting: 
	I1018 15:27:04.400361 1803906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 15:27:04.400410 1803906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 15:27:04.416023 1803906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46773
	I1018 15:27:04.416707 1803906 main.go:141] libmachine: () Calling .GetVersion
	I1018 15:27:04.417187 1803906 main.go:141] libmachine: Using API Version  1
	I1018 15:27:04.417225 1803906 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 15:27:04.417727 1803906 main.go:141] libmachine: () Calling .GetMachineName
	I1018 15:27:04.417942 1803906 main.go:141] libmachine: (pause-153767) Calling .DriverName
	I1018 15:27:04.418123 1803906 main.go:141] libmachine: (pause-153767) Calling .GetState
	I1018 15:27:04.420455 1803906 fix.go:112] recreateIfNeeded on pause-153767: state=Running err=<nil>
	W1018 15:27:04.420524 1803906 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 15:27:04.422484 1803906 out.go:252] * Updating the running kvm2 "pause-153767" VM ...
	I1018 15:27:04.422519 1803906 machine.go:93] provisionDockerMachine start ...
	I1018 15:27:04.422540 1803906 main.go:141] libmachine: (pause-153767) Calling .DriverName
	I1018 15:27:04.422795 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHHostname
	I1018 15:27:04.426777 1803906 main.go:141] libmachine: (pause-153767) DBG | domain pause-153767 has defined MAC address 52:54:00:7f:e4:c9 in network mk-pause-153767
	I1018 15:27:04.427370 1803906 main.go:141] libmachine: (pause-153767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e4:c9", ip: ""} in network mk-pause-153767: {Iface:virbr4 ExpiryTime:2025-10-18 16:25:51 +0000 UTC Type:0 Mac:52:54:00:7f:e4:c9 Iaid: IPaddr:192.168.72.16 Prefix:24 Hostname:pause-153767 Clientid:01:52:54:00:7f:e4:c9}
	I1018 15:27:04.427406 1803906 main.go:141] libmachine: (pause-153767) DBG | domain pause-153767 has defined IP address 192.168.72.16 and MAC address 52:54:00:7f:e4:c9 in network mk-pause-153767
	I1018 15:27:04.427649 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHPort
	I1018 15:27:04.427904 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHKeyPath
	I1018 15:27:04.428099 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHKeyPath
	I1018 15:27:04.428286 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHUsername
	I1018 15:27:04.428504 1803906 main.go:141] libmachine: Using SSH client type: native
	I1018 15:27:04.428918 1803906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.16 22 <nil> <nil>}
	I1018 15:27:04.428936 1803906 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 15:27:04.562406 1803906 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-153767
	
	I1018 15:27:04.562443 1803906 main.go:141] libmachine: (pause-153767) Calling .GetMachineName
	I1018 15:27:04.562785 1803906 buildroot.go:166] provisioning hostname "pause-153767"
	I1018 15:27:04.562861 1803906 main.go:141] libmachine: (pause-153767) Calling .GetMachineName
	I1018 15:27:04.563121 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHHostname
	I1018 15:27:04.567515 1803906 main.go:141] libmachine: (pause-153767) DBG | domain pause-153767 has defined MAC address 52:54:00:7f:e4:c9 in network mk-pause-153767
	I1018 15:27:04.568059 1803906 main.go:141] libmachine: (pause-153767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e4:c9", ip: ""} in network mk-pause-153767: {Iface:virbr4 ExpiryTime:2025-10-18 16:25:51 +0000 UTC Type:0 Mac:52:54:00:7f:e4:c9 Iaid: IPaddr:192.168.72.16 Prefix:24 Hostname:pause-153767 Clientid:01:52:54:00:7f:e4:c9}
	I1018 15:27:04.568098 1803906 main.go:141] libmachine: (pause-153767) DBG | domain pause-153767 has defined IP address 192.168.72.16 and MAC address 52:54:00:7f:e4:c9 in network mk-pause-153767
	I1018 15:27:04.568312 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHPort
	I1018 15:27:04.568519 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHKeyPath
	I1018 15:27:04.568713 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHKeyPath
	I1018 15:27:04.568898 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHUsername
	I1018 15:27:04.569068 1803906 main.go:141] libmachine: Using SSH client type: native
	I1018 15:27:04.569402 1803906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.16 22 <nil> <nil>}
	I1018 15:27:04.569419 1803906 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-153767 && echo "pause-153767" | sudo tee /etc/hostname
	I1018 15:27:04.718860 1803906 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-153767
	
	I1018 15:27:04.718902 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHHostname
	I1018 15:27:04.722376 1803906 main.go:141] libmachine: (pause-153767) DBG | domain pause-153767 has defined MAC address 52:54:00:7f:e4:c9 in network mk-pause-153767
	I1018 15:27:04.722890 1803906 main.go:141] libmachine: (pause-153767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e4:c9", ip: ""} in network mk-pause-153767: {Iface:virbr4 ExpiryTime:2025-10-18 16:25:51 +0000 UTC Type:0 Mac:52:54:00:7f:e4:c9 Iaid: IPaddr:192.168.72.16 Prefix:24 Hostname:pause-153767 Clientid:01:52:54:00:7f:e4:c9}
	I1018 15:27:04.722924 1803906 main.go:141] libmachine: (pause-153767) DBG | domain pause-153767 has defined IP address 192.168.72.16 and MAC address 52:54:00:7f:e4:c9 in network mk-pause-153767
	I1018 15:27:04.723164 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHPort
	I1018 15:27:04.723405 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHKeyPath
	I1018 15:27:04.723589 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHKeyPath
	I1018 15:27:04.723743 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHUsername
	I1018 15:27:04.723931 1803906 main.go:141] libmachine: Using SSH client type: native
	I1018 15:27:04.724237 1803906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.16 22 <nil> <nil>}
	I1018 15:27:04.724263 1803906 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-153767' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-153767/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-153767' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 15:27:04.856279 1803906 main.go:141] libmachine: SSH cmd err, output: <nil>: 
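The /etc/hosts script above is idempotent: it first checks whether the hostname is already mapped, then either rewrites an existing 127.0.1.1 entry in place or appends a new one. A standalone sketch of the same pattern, with the profile name substituted by hand (illustrative, not the exact script the runner ships):

	# Idempotent hostname mapping, mirroring the SSH script above (illustrative sketch).
	NAME=pause-153767
	if ! grep -q "\s${NAME}$" /etc/hosts; then
	  if grep -q '^127\.0\.1\.1\s' /etc/hosts; then
	    sudo sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 ${NAME}/" /etc/hosts   # replace existing entry
	  else
	    echo "127.0.1.1 ${NAME}" | sudo tee -a /etc/hosts >/dev/null      # append a new entry
	  fi
	fi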
	I1018 15:27:04.856356 1803906 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21409-1755824/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-1755824/.minikube}
	I1018 15:27:04.856405 1803906 buildroot.go:174] setting up certificates
	I1018 15:27:04.856426 1803906 provision.go:84] configureAuth start
	I1018 15:27:04.856443 1803906 main.go:141] libmachine: (pause-153767) Calling .GetMachineName
	I1018 15:27:04.856747 1803906 main.go:141] libmachine: (pause-153767) Calling .GetIP
	I1018 15:27:04.860036 1803906 main.go:141] libmachine: (pause-153767) DBG | domain pause-153767 has defined MAC address 52:54:00:7f:e4:c9 in network mk-pause-153767
	I1018 15:27:04.860502 1803906 main.go:141] libmachine: (pause-153767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e4:c9", ip: ""} in network mk-pause-153767: {Iface:virbr4 ExpiryTime:2025-10-18 16:25:51 +0000 UTC Type:0 Mac:52:54:00:7f:e4:c9 Iaid: IPaddr:192.168.72.16 Prefix:24 Hostname:pause-153767 Clientid:01:52:54:00:7f:e4:c9}
	I1018 15:27:04.860537 1803906 main.go:141] libmachine: (pause-153767) DBG | domain pause-153767 has defined IP address 192.168.72.16 and MAC address 52:54:00:7f:e4:c9 in network mk-pause-153767
	I1018 15:27:04.860734 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHHostname
	I1018 15:27:04.864134 1803906 main.go:141] libmachine: (pause-153767) DBG | domain pause-153767 has defined MAC address 52:54:00:7f:e4:c9 in network mk-pause-153767
	I1018 15:27:04.864558 1803906 main.go:141] libmachine: (pause-153767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e4:c9", ip: ""} in network mk-pause-153767: {Iface:virbr4 ExpiryTime:2025-10-18 16:25:51 +0000 UTC Type:0 Mac:52:54:00:7f:e4:c9 Iaid: IPaddr:192.168.72.16 Prefix:24 Hostname:pause-153767 Clientid:01:52:54:00:7f:e4:c9}
	I1018 15:27:04.864584 1803906 main.go:141] libmachine: (pause-153767) DBG | domain pause-153767 has defined IP address 192.168.72.16 and MAC address 52:54:00:7f:e4:c9 in network mk-pause-153767
	I1018 15:27:04.864910 1803906 provision.go:143] copyHostCerts
	I1018 15:27:04.865004 1803906 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.pem, removing ...
	I1018 15:27:04.865031 1803906 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.pem
	I1018 15:27:04.865104 1803906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.pem (1082 bytes)
	I1018 15:27:04.865243 1803906 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1755824/.minikube/cert.pem, removing ...
	I1018 15:27:04.865257 1803906 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1755824/.minikube/cert.pem
	I1018 15:27:04.865294 1803906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-1755824/.minikube/cert.pem (1123 bytes)
	I1018 15:27:04.865401 1803906 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1755824/.minikube/key.pem, removing ...
	I1018 15:27:04.865413 1803906 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1755824/.minikube/key.pem
	I1018 15:27:04.865449 1803906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-1755824/.minikube/key.pem (1675 bytes)
	I1018 15:27:04.865640 1803906 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca-key.pem org=jenkins.pause-153767 san=[127.0.0.1 192.168.72.16 localhost minikube pause-153767]
	I1018 15:27:04.922564 1803906 provision.go:177] copyRemoteCerts
	I1018 15:27:04.922634 1803906 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 15:27:04.922665 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHHostname
	I1018 15:27:04.925553 1803906 main.go:141] libmachine: (pause-153767) DBG | domain pause-153767 has defined MAC address 52:54:00:7f:e4:c9 in network mk-pause-153767
	I1018 15:27:04.926045 1803906 main.go:141] libmachine: (pause-153767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e4:c9", ip: ""} in network mk-pause-153767: {Iface:virbr4 ExpiryTime:2025-10-18 16:25:51 +0000 UTC Type:0 Mac:52:54:00:7f:e4:c9 Iaid: IPaddr:192.168.72.16 Prefix:24 Hostname:pause-153767 Clientid:01:52:54:00:7f:e4:c9}
	I1018 15:27:04.926076 1803906 main.go:141] libmachine: (pause-153767) DBG | domain pause-153767 has defined IP address 192.168.72.16 and MAC address 52:54:00:7f:e4:c9 in network mk-pause-153767
	I1018 15:27:04.926286 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHPort
	I1018 15:27:04.926466 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHKeyPath
	I1018 15:27:04.926585 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHUsername
	I1018 15:27:04.926694 1803906 sshutil.go:53] new ssh client: &{IP:192.168.72.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/pause-153767/id_rsa Username:docker}
	I1018 15:27:05.026559 1803906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 15:27:05.076998 1803906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 15:27:05.117072 1803906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 15:27:05.156581 1803906 provision.go:87] duration metric: took 300.134326ms to configureAuth
	I1018 15:27:05.156616 1803906 buildroot.go:189] setting minikube options for container-runtime
	I1018 15:27:05.156898 1803906 config.go:182] Loaded profile config "pause-153767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:27:05.157019 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHHostname
	I1018 15:27:05.160434 1803906 main.go:141] libmachine: (pause-153767) DBG | domain pause-153767 has defined MAC address 52:54:00:7f:e4:c9 in network mk-pause-153767
	I1018 15:27:05.160895 1803906 main.go:141] libmachine: (pause-153767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e4:c9", ip: ""} in network mk-pause-153767: {Iface:virbr4 ExpiryTime:2025-10-18 16:25:51 +0000 UTC Type:0 Mac:52:54:00:7f:e4:c9 Iaid: IPaddr:192.168.72.16 Prefix:24 Hostname:pause-153767 Clientid:01:52:54:00:7f:e4:c9}
	I1018 15:27:05.160927 1803906 main.go:141] libmachine: (pause-153767) DBG | domain pause-153767 has defined IP address 192.168.72.16 and MAC address 52:54:00:7f:e4:c9 in network mk-pause-153767
	I1018 15:27:05.161172 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHPort
	I1018 15:27:05.161429 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHKeyPath
	I1018 15:27:05.161613 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHKeyPath
	I1018 15:27:05.161781 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHUsername
	I1018 15:27:05.161946 1803906 main.go:141] libmachine: Using SSH client type: native
	I1018 15:27:05.162272 1803906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.16 22 <nil> <nil>}
	I1018 15:27:05.162292 1803906 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 15:27:10.943949 1803906 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 15:27:10.943985 1803906 machine.go:96] duration metric: took 6.521456497s to provisionDockerMachine
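The nearly six-second gap between issuing the /etc/sysconfig/crio.minikube write at 15:27:05 and its result at 15:27:10 is the sudo systemctl restart crio at the end of that command. To confirm the drop-in landed and the daemon came back (illustrative, not part of the test):

	# Verify the generated environment file and that CRI-O restarted cleanly (illustrative).
	cat /etc/sysconfig/crio.minikube
	sudo systemctl is-active crio && sudo systemctl show -p ActiveEnterTimestamp crio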
	I1018 15:27:10.944001 1803906 start.go:293] postStartSetup for "pause-153767" (driver="kvm2")
	I1018 15:27:10.944014 1803906 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 15:27:10.944037 1803906 main.go:141] libmachine: (pause-153767) Calling .DriverName
	I1018 15:27:10.944490 1803906 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 15:27:10.944546 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHHostname
	I1018 15:27:10.950231 1803906 main.go:141] libmachine: (pause-153767) DBG | domain pause-153767 has defined MAC address 52:54:00:7f:e4:c9 in network mk-pause-153767
	I1018 15:27:10.950792 1803906 main.go:141] libmachine: (pause-153767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e4:c9", ip: ""} in network mk-pause-153767: {Iface:virbr4 ExpiryTime:2025-10-18 16:25:51 +0000 UTC Type:0 Mac:52:54:00:7f:e4:c9 Iaid: IPaddr:192.168.72.16 Prefix:24 Hostname:pause-153767 Clientid:01:52:54:00:7f:e4:c9}
	I1018 15:27:10.950821 1803906 main.go:141] libmachine: (pause-153767) DBG | domain pause-153767 has defined IP address 192.168.72.16 and MAC address 52:54:00:7f:e4:c9 in network mk-pause-153767
	I1018 15:27:10.951181 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHPort
	I1018 15:27:10.951456 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHKeyPath
	I1018 15:27:10.951703 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHUsername
	I1018 15:27:10.951918 1803906 sshutil.go:53] new ssh client: &{IP:192.168.72.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/pause-153767/id_rsa Username:docker}
	I1018 15:27:11.069871 1803906 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 15:27:11.078259 1803906 info.go:137] Remote host: Buildroot 2025.02
	I1018 15:27:11.078290 1803906 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1755824/.minikube/addons for local assets ...
	I1018 15:27:11.078382 1803906 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1755824/.minikube/files for local assets ...
	I1018 15:27:11.078482 1803906 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-1755824/.minikube/files/etc/ssl/certs/17597922.pem -> 17597922.pem in /etc/ssl/certs
	I1018 15:27:11.078597 1803906 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 15:27:11.100857 1803906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/files/etc/ssl/certs/17597922.pem --> /etc/ssl/certs/17597922.pem (1708 bytes)
	I1018 15:27:11.147110 1803906 start.go:296] duration metric: took 203.092845ms for postStartSetup
	I1018 15:27:11.147155 1803906 fix.go:56] duration metric: took 6.747192759s for fixHost
	I1018 15:27:11.147178 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHHostname
	I1018 15:27:11.150402 1803906 main.go:141] libmachine: (pause-153767) DBG | domain pause-153767 has defined MAC address 52:54:00:7f:e4:c9 in network mk-pause-153767
	I1018 15:27:11.150840 1803906 main.go:141] libmachine: (pause-153767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e4:c9", ip: ""} in network mk-pause-153767: {Iface:virbr4 ExpiryTime:2025-10-18 16:25:51 +0000 UTC Type:0 Mac:52:54:00:7f:e4:c9 Iaid: IPaddr:192.168.72.16 Prefix:24 Hostname:pause-153767 Clientid:01:52:54:00:7f:e4:c9}
	I1018 15:27:11.150888 1803906 main.go:141] libmachine: (pause-153767) DBG | domain pause-153767 has defined IP address 192.168.72.16 and MAC address 52:54:00:7f:e4:c9 in network mk-pause-153767
	I1018 15:27:11.151078 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHPort
	I1018 15:27:11.151317 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHKeyPath
	I1018 15:27:11.151516 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHKeyPath
	I1018 15:27:11.151736 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHUsername
	I1018 15:27:11.151943 1803906 main.go:141] libmachine: Using SSH client type: native
	I1018 15:27:11.152142 1803906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.16 22 <nil> <nil>}
	I1018 15:27:11.152152 1803906 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1018 15:27:11.287885 1803906 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760801231.279122326
	
	I1018 15:27:11.287918 1803906 fix.go:216] guest clock: 1760801231.279122326
	I1018 15:27:11.287927 1803906 fix.go:229] Guest: 2025-10-18 15:27:11.279122326 +0000 UTC Remote: 2025-10-18 15:27:11.147159839 +0000 UTC m=+6.950990686 (delta=131.962487ms)
	I1018 15:27:11.287956 1803906 fix.go:200] guest clock delta is within tolerance: 131.962487ms
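The clock check runs date +%s.%N inside the guest (the SSH command a few lines up) and compares it to the host-side timestamp taken around the call; a ~132ms delta is well inside tolerance, so no clock adjustment is needed. By hand it amounts to roughly this (illustrative sketch; the ssh invocation and key path are assumptions based on the client settings logged above):

	# Rough guest/host clock-skew measurement (illustrative sketch).
	key=/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/pause-153767/id_rsa
	guest=$(ssh -i "$key" docker@192.168.72.16 date +%s.%N)
	host=$(date +%s.%N)
	awk -v g="$guest" -v h="$host" 'BEGIN { printf "delta: %.3fs\n", g - h }'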
	I1018 15:27:11.287964 1803906 start.go:83] releasing machines lock for "pause-153767", held for 6.888033534s
	I1018 15:27:11.287989 1803906 main.go:141] libmachine: (pause-153767) Calling .DriverName
	I1018 15:27:11.288386 1803906 main.go:141] libmachine: (pause-153767) Calling .GetIP
	I1018 15:27:11.292253 1803906 main.go:141] libmachine: (pause-153767) DBG | domain pause-153767 has defined MAC address 52:54:00:7f:e4:c9 in network mk-pause-153767
	I1018 15:27:11.359409 1803906 main.go:141] libmachine: (pause-153767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e4:c9", ip: ""} in network mk-pause-153767: {Iface:virbr4 ExpiryTime:2025-10-18 16:25:51 +0000 UTC Type:0 Mac:52:54:00:7f:e4:c9 Iaid: IPaddr:192.168.72.16 Prefix:24 Hostname:pause-153767 Clientid:01:52:54:00:7f:e4:c9}
	I1018 15:27:11.359448 1803906 main.go:141] libmachine: (pause-153767) DBG | domain pause-153767 has defined IP address 192.168.72.16 and MAC address 52:54:00:7f:e4:c9 in network mk-pause-153767
	I1018 15:27:11.360019 1803906 main.go:141] libmachine: (pause-153767) Calling .DriverName
	I1018 15:27:11.360956 1803906 main.go:141] libmachine: (pause-153767) Calling .DriverName
	I1018 15:27:11.361245 1803906 main.go:141] libmachine: (pause-153767) Calling .DriverName
	I1018 15:27:11.361431 1803906 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 15:27:11.361488 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHHostname
	I1018 15:27:11.361493 1803906 ssh_runner.go:195] Run: cat /version.json
	I1018 15:27:11.361519 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHHostname
	I1018 15:27:11.365729 1803906 main.go:141] libmachine: (pause-153767) DBG | domain pause-153767 has defined MAC address 52:54:00:7f:e4:c9 in network mk-pause-153767
	I1018 15:27:11.365994 1803906 main.go:141] libmachine: (pause-153767) DBG | domain pause-153767 has defined MAC address 52:54:00:7f:e4:c9 in network mk-pause-153767
	I1018 15:27:11.366203 1803906 main.go:141] libmachine: (pause-153767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e4:c9", ip: ""} in network mk-pause-153767: {Iface:virbr4 ExpiryTime:2025-10-18 16:25:51 +0000 UTC Type:0 Mac:52:54:00:7f:e4:c9 Iaid: IPaddr:192.168.72.16 Prefix:24 Hostname:pause-153767 Clientid:01:52:54:00:7f:e4:c9}
	I1018 15:27:11.366230 1803906 main.go:141] libmachine: (pause-153767) DBG | domain pause-153767 has defined IP address 192.168.72.16 and MAC address 52:54:00:7f:e4:c9 in network mk-pause-153767
	I1018 15:27:11.366419 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHPort
	I1018 15:27:11.366620 1803906 main.go:141] libmachine: (pause-153767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e4:c9", ip: ""} in network mk-pause-153767: {Iface:virbr4 ExpiryTime:2025-10-18 16:25:51 +0000 UTC Type:0 Mac:52:54:00:7f:e4:c9 Iaid: IPaddr:192.168.72.16 Prefix:24 Hostname:pause-153767 Clientid:01:52:54:00:7f:e4:c9}
	I1018 15:27:11.366623 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHKeyPath
	I1018 15:27:11.366642 1803906 main.go:141] libmachine: (pause-153767) DBG | domain pause-153767 has defined IP address 192.168.72.16 and MAC address 52:54:00:7f:e4:c9 in network mk-pause-153767
	I1018 15:27:11.366833 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHUsername
	I1018 15:27:11.366909 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHPort
	I1018 15:27:11.367106 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHKeyPath
	I1018 15:27:11.367105 1803906 sshutil.go:53] new ssh client: &{IP:192.168.72.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/pause-153767/id_rsa Username:docker}
	I1018 15:27:11.367233 1803906 main.go:141] libmachine: (pause-153767) Calling .GetSSHUsername
	I1018 15:27:11.367375 1803906 sshutil.go:53] new ssh client: &{IP:192.168.72.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/pause-153767/id_rsa Username:docker}
	I1018 15:27:11.461913 1803906 ssh_runner.go:195] Run: systemctl --version
	I1018 15:27:11.492617 1803906 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 15:27:11.649996 1803906 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 15:27:11.660203 1803906 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 15:27:11.660273 1803906 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 15:27:11.672511 1803906 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
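The find command two lines up is what would have renamed any bridge or podman CNI config to *.mk_disabled so CRI-O cannot pick up a conflicting default network; here none existed. The log strips the shell quoting, so with the escaping restored it reads (illustrative):

	# Disable conflicting default CNI configs, as the runner invokes above (quoting restored).
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;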
	I1018 15:27:11.672540 1803906 start.go:495] detecting cgroup driver to use...
	I1018 15:27:11.672638 1803906 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 15:27:11.694335 1803906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 15:27:11.716011 1803906 docker.go:218] disabling cri-docker service (if available) ...
	I1018 15:27:11.716091 1803906 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 15:27:11.740898 1803906 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 15:27:11.762666 1803906 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 15:27:12.041399 1803906 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 15:27:12.272060 1803906 docker.go:234] disabling docker service ...
	I1018 15:27:12.272136 1803906 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 15:27:12.307096 1803906 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 15:27:12.326572 1803906 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 15:27:12.552477 1803906 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 15:27:12.988254 1803906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
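The sequence above makes CRI-O the only runtime that can own the CRI socket: containerd is stopped, cri-docker's socket and service are stopped, disabled, and masked, and docker gets the same treatment, with is-active probes confirming each is down. A quick way to see the end state (illustrative):

	# Confirm competing runtimes are masked/inactive and CRI-O is up (illustrative).
	systemctl is-enabled cri-docker.service docker.service 2>/dev/null   # expect: masked
	systemctl is-active crio containerd docker 2>/dev/null               # expect: active inactive inactive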
	I1018 15:27:13.056553 1803906 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 15:27:13.178374 1803906 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 15:27:13.178475 1803906 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:27:13.210958 1803906 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 15:27:13.211053 1803906 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:27:13.233887 1803906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:27:13.264134 1803906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:27:13.286199 1803906 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 15:27:13.320619 1803906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:27:13.358235 1803906 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:27:13.393880 1803906 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:27:13.434457 1803906 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 15:27:13.461162 1803906 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 15:27:13.480648 1803906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:27:13.901075 1803906 ssh_runner.go:195] Run: sudo systemctl restart crio
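The sed pipeline from 15:27:13.17 onward rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroupfs as cgroup manager, conmon_cgroup = "pod", and an unprivileged-port sysctl, followed by the daemon-reload and CRI-O restart above. To inspect what those edits produced (illustrative; expected values reconstructed from the commands, not captured from the VM):

	# Show the settings the sed edits above wrote (illustrative).
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# Expected (reconstructed):
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",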
	I1018 15:27:14.671368 1803906 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 15:27:14.671463 1803906 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 15:27:14.679451 1803906 start.go:563] Will wait 60s for crictl version
	I1018 15:27:14.679523 1803906 ssh_runner.go:195] Run: which crictl
	I1018 15:27:14.686799 1803906 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1018 15:27:14.747545 1803906 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1018 15:27:14.747655 1803906 ssh_runner.go:195] Run: crio --version
	I1018 15:27:14.797691 1803906 ssh_runner.go:195] Run: crio --version
	I1018 15:27:14.848473 1803906 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1018 15:27:14.849790 1803906 main.go:141] libmachine: (pause-153767) Calling .GetIP
	I1018 15:27:14.854541 1803906 main.go:141] libmachine: (pause-153767) DBG | domain pause-153767 has defined MAC address 52:54:00:7f:e4:c9 in network mk-pause-153767
	I1018 15:27:14.855180 1803906 main.go:141] libmachine: (pause-153767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e4:c9", ip: ""} in network mk-pause-153767: {Iface:virbr4 ExpiryTime:2025-10-18 16:25:51 +0000 UTC Type:0 Mac:52:54:00:7f:e4:c9 Iaid: IPaddr:192.168.72.16 Prefix:24 Hostname:pause-153767 Clientid:01:52:54:00:7f:e4:c9}
	I1018 15:27:14.855216 1803906 main.go:141] libmachine: (pause-153767) DBG | domain pause-153767 has defined IP address 192.168.72.16 and MAC address 52:54:00:7f:e4:c9 in network mk-pause-153767
	I1018 15:27:14.855687 1803906 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1018 15:27:14.862304 1803906 kubeadm.go:883] updating cluster {Name:pause-153767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-153767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.16 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 15:27:14.862532 1803906 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:27:14.862603 1803906 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 15:27:14.928038 1803906 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 15:27:14.928072 1803906 crio.go:433] Images already preloaded, skipping extraction
	I1018 15:27:14.928137 1803906 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 15:27:15.030317 1803906 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 15:27:15.030369 1803906 cache_images.go:85] Images are preloaded, skipping loading
	I1018 15:27:15.030379 1803906 kubeadm.go:934] updating node { 192.168.72.16 8443 v1.34.1 crio true true} ...
	I1018 15:27:15.030535 1803906 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-153767 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-153767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
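In the kubelet unit fragment above, the empty ExecStart= line is the standard systemd idiom for a drop-in that replaces, rather than appends to, the ExecStart of the base unit. Once the fragment is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 311-byte scp a few lines below), it only takes effect after a reload (illustrative):

	# Drop-ins apply after a reload; systemctl cat shows the merged result (illustrative).
	sudo systemctl daemon-reload
	systemctl cat kubelet.service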
	I1018 15:27:15.030639 1803906 ssh_runner.go:195] Run: crio config
	I1018 15:27:15.189111 1803906 cni.go:84] Creating CNI manager for ""
	I1018 15:27:15.189143 1803906 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 15:27:15.189176 1803906 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 15:27:15.189319 1803906 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.16 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-153767 NodeName:pause-153767 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 15:27:15.189553 1803906 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-153767"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.16"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.16"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
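That ends the generated kubeadm config: InitConfiguration and ClusterConfiguration (v1beta4) plus KubeletConfiguration and KubeProxyConfiguration, written to /var/tmp/minikube/kubeadm.yaml.new by the 2212-byte scp below. A config like this can be sanity-checked by hand with recent kubeadm (illustrative; the test itself does not run this):

	# Validate the generated multi-document config against its schemas (illustrative).
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new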
	
	I1018 15:27:15.189696 1803906 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 15:27:15.221052 1803906 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 15:27:15.221240 1803906 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 15:27:15.260559 1803906 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1018 15:27:15.309128 1803906 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 15:27:15.385458 1803906 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1018 15:27:15.449080 1803906 ssh_runner.go:195] Run: grep 192.168.72.16	control-plane.minikube.internal$ /etc/hosts
	I1018 15:27:15.459138 1803906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:27:15.830275 1803906 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 15:27:15.854159 1803906 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/pause-153767 for IP: 192.168.72.16
	I1018 15:27:15.854192 1803906 certs.go:195] generating shared ca certs ...
	I1018 15:27:15.854215 1803906 certs.go:227] acquiring lock for ca certs: {Name:mk20fae4d22bb4937e66ac0eaa52c1608fa22770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:27:15.854431 1803906 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.key
	I1018 15:27:15.854494 1803906 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.key
	I1018 15:27:15.854506 1803906 certs.go:257] generating profile certs ...
	I1018 15:27:15.854602 1803906 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/pause-153767/client.key
	I1018 15:27:15.854751 1803906 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/pause-153767/apiserver.key.3a259a0d
	I1018 15:27:15.854820 1803906 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/pause-153767/proxy-client.key
	I1018 15:27:15.854967 1803906 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/1759792.pem (1338 bytes)
	W1018 15:27:15.854999 1803906 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/1759792_empty.pem, impossibly tiny 0 bytes
	I1018 15:27:15.855009 1803906 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 15:27:15.855031 1803906 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem (1082 bytes)
	I1018 15:27:15.855052 1803906 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/cert.pem (1123 bytes)
	I1018 15:27:15.855077 1803906 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/key.pem (1675 bytes)
	I1018 15:27:15.855114 1803906 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/files/etc/ssl/certs/17597922.pem (1708 bytes)
	I1018 15:27:15.855971 1803906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 15:27:15.907167 1803906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 15:27:15.953008 1803906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 15:27:15.998504 1803906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 15:27:16.046436 1803906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/pause-153767/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 15:27:16.088712 1803906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/pause-153767/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 15:27:16.148601 1803906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/pause-153767/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 15:27:16.206854 1803906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/pause-153767/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 15:27:16.243478 1803906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/1759792.pem --> /usr/share/ca-certificates/1759792.pem (1338 bytes)
	I1018 15:27:16.286669 1803906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/files/etc/ssl/certs/17597922.pem --> /usr/share/ca-certificates/17597922.pem (1708 bytes)
	I1018 15:27:16.332287 1803906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 15:27:16.380763 1803906 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 15:27:16.411470 1803906 ssh_runner.go:195] Run: openssl version
	I1018 15:27:16.419538 1803906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1759792.pem && ln -fs /usr/share/ca-certificates/1759792.pem /etc/ssl/certs/1759792.pem"
	I1018 15:27:16.439334 1803906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1759792.pem
	I1018 15:27:16.446095 1803906 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 14:22 /usr/share/ca-certificates/1759792.pem
	I1018 15:27:16.446157 1803906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1759792.pem
	I1018 15:27:16.455316 1803906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1759792.pem /etc/ssl/certs/51391683.0"
	I1018 15:27:16.469918 1803906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17597922.pem && ln -fs /usr/share/ca-certificates/17597922.pem /etc/ssl/certs/17597922.pem"
	I1018 15:27:16.486927 1803906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17597922.pem
	I1018 15:27:16.493734 1803906 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 14:22 /usr/share/ca-certificates/17597922.pem
	I1018 15:27:16.493824 1803906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17597922.pem
	I1018 15:27:16.505054 1803906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17597922.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 15:27:16.523164 1803906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 15:27:16.543148 1803906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:27:16.549653 1803906 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 14:09 /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:27:16.549738 1803906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:27:16.559223 1803906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
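Each certificate above is installed twice: the PEM under /usr/share/ca-certificates and a hash-named symlink in /etc/ssl/certs, which is the OpenSSL trust-directory convention (<subject-hash>.0, the same names c_rehash would generate). The hard-coded hashes (51391683, 3ec20f2e, b5213941) are the outputs of the openssl x509 -hash calls. Recreating one by hand (illustrative):

	# Recreate one trust-dir symlink from its subject hash (illustrative).
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h = b5213941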
	I1018 15:27:16.577455 1803906 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 15:27:16.585607 1803906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 15:27:16.599596 1803906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 15:27:16.613771 1803906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 15:27:16.628112 1803906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 15:27:16.642244 1803906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 15:27:16.651411 1803906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
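The six openssl x509 -checkend 86400 runs above ask whether each control-plane certificate survives the next 86400 seconds (24 hours); a non-zero exit would force regeneration instead of reuse. For a single cert (illustrative):

	# Exit 0 if still valid 24h from now, non-zero otherwise (illustrative).
	sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo "valid for >=24h" || echo "expires within 24h"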
	I1018 15:27:16.660592 1803906 kubeadm.go:400] StartCluster: {Name:pause-153767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-153767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.16 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:27:16.660777 1803906 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 15:27:16.660915 1803906 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 15:27:16.724230 1803906 cri.go:89] found id: "7c34cce37ad9085a41dd300f3b2ca1aac9a93bb413c9b916ea9373feb06538cb"
	I1018 15:27:16.724264 1803906 cri.go:89] found id: "6def6b6b8e27732be2161935ee97726ca9a087ca6cf6eb45ce93002e5be07331"
	I1018 15:27:16.724270 1803906 cri.go:89] found id: "ab6f334e8dc36521935cdc9d4f5c25a21972c81a86da9a57acc28da27a052059"
	I1018 15:27:16.724274 1803906 cri.go:89] found id: "2867fda4fb320f98bf6f1718fcdfc07eb29558e3d79aafb6c7a061e3085ccf7c"
	I1018 15:27:16.724278 1803906 cri.go:89] found id: "cb528bc32dbb1002be0cffac24ce673dfd1509c69e86dc426a4e7871f3e89a13"
	I1018 15:27:16.724283 1803906 cri.go:89] found id: "5b5b5aaf2a5329d2967c6a33790286ccc71f927089704cce0272c4edbe7f1026"
	I1018 15:27:16.724287 1803906 cri.go:89] found id: ""
	I1018 15:27:16.724365 1803906 ssh_runner.go:195] Run: sudo runc list -f json
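The six IDs above come from filtering CRI containers by the io.kubernetes.pod.namespace label; the runc list that follows is the lower-level cross-check against OCI runtime state. Any one of the IDs can be inspected directly (illustrative):

	# Inspect one of the containers found above (illustrative; ID shortened for readability).
	sudo crictl inspect -o yaml 7c34cce37ad9 | head -n 20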

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-153767 -n pause-153767
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-153767 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-153767 logs -n 25: (1.747261724s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-options-155388                                                                                                                                             │ cert-options-155388       │ jenkins │ v1.37.0 │ 18 Oct 25 15:24 UTC │ 18 Oct 25 15:24 UTC │
	│ start   │ -p cert-expiration-486593 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                   │ cert-expiration-486593    │ jenkins │ v1.37.0 │ 18 Oct 25 15:24 UTC │ 18 Oct 25 15:25 UTC │
	│ delete  │ -p NoKubernetes-479967                                                                                                                                             │ NoKubernetes-479967       │ jenkins │ v1.37.0 │ 18 Oct 25 15:24 UTC │ 18 Oct 25 15:24 UTC │
	│ start   │ -p NoKubernetes-479967 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                    │ NoKubernetes-479967       │ jenkins │ v1.37.0 │ 18 Oct 25 15:24 UTC │ 18 Oct 25 15:25 UTC │
	│ ssh     │ force-systemd-flag-261740 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                               │ force-systemd-flag-261740 │ jenkins │ v1.37.0 │ 18 Oct 25 15:24 UTC │ 18 Oct 25 15:24 UTC │
	│ delete  │ -p force-systemd-flag-261740                                                                                                                                       │ force-systemd-flag-261740 │ jenkins │ v1.37.0 │ 18 Oct 25 15:24 UTC │ 18 Oct 25 15:24 UTC │
	│ start   │ -p kubernetes-upgrade-075048 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-075048 │ jenkins │ v1.37.0 │ 18 Oct 25 15:24 UTC │ 18 Oct 25 15:25 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-607040 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker        │ running-upgrade-607040    │ jenkins │ v1.37.0 │ 18 Oct 25 15:24 UTC │                     │
	│ delete  │ -p running-upgrade-607040                                                                                                                                          │ running-upgrade-607040    │ jenkins │ v1.37.0 │ 18 Oct 25 15:24 UTC │ 18 Oct 25 15:24 UTC │
	│ start   │ -p pause-153767 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                │ pause-153767              │ jenkins │ v1.37.0 │ 18 Oct 25 15:24 UTC │ 18 Oct 25 15:27 UTC │
	│ ssh     │ -p NoKubernetes-479967 sudo systemctl is-active --quiet service kubelet                                                                                            │ NoKubernetes-479967       │ jenkins │ v1.37.0 │ 18 Oct 25 15:25 UTC │                     │
	│ stop    │ -p NoKubernetes-479967                                                                                                                                             │ NoKubernetes-479967       │ jenkins │ v1.37.0 │ 18 Oct 25 15:25 UTC │ 18 Oct 25 15:25 UTC │
	│ start   │ -p NoKubernetes-479967 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                         │ NoKubernetes-479967       │ jenkins │ v1.37.0 │ 18 Oct 25 15:25 UTC │ 18 Oct 25 15:26 UTC │
	│ stop    │ -p kubernetes-upgrade-075048                                                                                                                                       │ kubernetes-upgrade-075048 │ jenkins │ v1.37.0 │ 18 Oct 25 15:25 UTC │ 18 Oct 25 15:25 UTC │
	│ start   │ -p kubernetes-upgrade-075048 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-075048 │ jenkins │ v1.37.0 │ 18 Oct 25 15:25 UTC │ 18 Oct 25 15:26 UTC │
	│ ssh     │ -p NoKubernetes-479967 sudo systemctl is-active --quiet service kubelet                                                                                            │ NoKubernetes-479967       │ jenkins │ v1.37.0 │ 18 Oct 25 15:26 UTC │                     │
	│ delete  │ -p NoKubernetes-479967                                                                                                                                             │ NoKubernetes-479967       │ jenkins │ v1.37.0 │ 18 Oct 25 15:26 UTC │ 18 Oct 25 15:26 UTC │
	│ start   │ -p stopped-upgrade-646879 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                     │ stopped-upgrade-646879    │ jenkins │ v1.32.0 │ 18 Oct 25 15:26 UTC │ 18 Oct 25 15:27 UTC │
	│ start   │ -p kubernetes-upgrade-075048 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                        │ kubernetes-upgrade-075048 │ jenkins │ v1.37.0 │ 18 Oct 25 15:26 UTC │                     │
	│ start   │ -p kubernetes-upgrade-075048 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-075048 │ jenkins │ v1.37.0 │ 18 Oct 25 15:26 UTC │ 18 Oct 25 15:27 UTC │
	│ start   │ -p pause-153767 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                         │ pause-153767              │ jenkins │ v1.37.0 │ 18 Oct 25 15:27 UTC │ 18 Oct 25 15:27 UTC │
	│ delete  │ -p kubernetes-upgrade-075048                                                                                                                                       │ kubernetes-upgrade-075048 │ jenkins │ v1.37.0 │ 18 Oct 25 15:27 UTC │ 18 Oct 25 15:27 UTC │
	│ start   │ -p auto-320866 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                  │ auto-320866               │ jenkins │ v1.37.0 │ 18 Oct 25 15:27 UTC │                     │
	│ stop    │ stopped-upgrade-646879 stop                                                                                                                                        │ stopped-upgrade-646879    │ jenkins │ v1.32.0 │ 18 Oct 25 15:27 UTC │ 18 Oct 25 15:27 UTC │
	│ start   │ -p stopped-upgrade-646879 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                 │ stopped-upgrade-646879    │ jenkins │ v1.37.0 │ 18 Oct 25 15:27 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 15:27:19
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
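
The header above documents the klog/glog line format used for every entry in this trace: a severity letter (I/W/E/F), a month-day date, a microsecond timestamp, the thread/process id, the source file and line, then the message. As a quick aid for slicing these logs offline, here is a minimal Go sketch; the regular expression is illustrative, not minikube code:

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine matches the header described above:
	// severity [IWEF], mmdd date, hh:mm:ss.uuuuuu time,
	// thread id, file:line, then the message.
	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

	func main() {
		sample := `I1018 15:27:19.547324 1804300 out.go:360] Setting OutFile to fd 1 ...`
		m := klogLine.FindStringSubmatch(sample)
		if m == nil {
			fmt.Println("no match")
			return
		}
		fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}

Filtering on the pid field (1804300, 1804089, 1803906) is the easiest way to untangle the interleaved processes in the sections that follow.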
	I1018 15:27:19.547324 1804300 out.go:360] Setting OutFile to fd 1 ...
	I1018 15:27:19.547664 1804300 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:27:19.547676 1804300 out.go:374] Setting ErrFile to fd 2...
	I1018 15:27:19.547684 1804300 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:27:19.547995 1804300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1755824/.minikube/bin
	I1018 15:27:19.548688 1804300 out.go:368] Setting JSON to false
	I1018 15:27:19.550121 1804300 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":25788,"bootTime":1760775452,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 15:27:19.550273 1804300 start.go:141] virtualization: kvm guest
	I1018 15:27:19.554529 1804300 out.go:179] * [stopped-upgrade-646879] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 15:27:19.556205 1804300 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 15:27:19.556198 1804300 notify.go:220] Checking for updates...
	I1018 15:27:19.558768 1804300 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 15:27:19.560142 1804300 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1755824/kubeconfig
	I1018 15:27:19.561400 1804300 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1755824/.minikube
	I1018 15:27:19.562677 1804300 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 15:27:19.564000 1804300 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 15:27:19.565900 1804300 config.go:182] Loaded profile config "stopped-upgrade-646879": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1018 15:27:19.566411 1804300 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 15:27:19.566497 1804300 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 15:27:19.582456 1804300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43015
	I1018 15:27:19.583073 1804300 main.go:141] libmachine: () Calling .GetVersion
	I1018 15:27:19.583722 1804300 main.go:141] libmachine: Using API Version  1
	I1018 15:27:19.583747 1804300 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 15:27:19.584203 1804300 main.go:141] libmachine: () Calling .GetMachineName
	I1018 15:27:19.584443 1804300 main.go:141] libmachine: (stopped-upgrade-646879) Calling .DriverName
	I1018 15:27:19.586405 1804300 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1018 15:27:19.587833 1804300 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 15:27:19.588392 1804300 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 15:27:19.588466 1804300 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 15:27:19.604287 1804300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46011
	I1018 15:27:19.604824 1804300 main.go:141] libmachine: () Calling .GetVersion
	I1018 15:27:19.605449 1804300 main.go:141] libmachine: Using API Version  1
	I1018 15:27:19.605482 1804300 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 15:27:19.606005 1804300 main.go:141] libmachine: () Calling .GetMachineName
	I1018 15:27:19.606244 1804300 main.go:141] libmachine: (stopped-upgrade-646879) Calling .DriverName
	I1018 15:27:19.646239 1804300 out.go:179] * Using the kvm2 driver based on existing profile
	I1018 15:27:19.647414 1804300 start.go:305] selected driver: kvm2
	I1018 15:27:19.647435 1804300 start.go:925] validating driver "kvm2" against &{Name:stopped-upgrade-646879 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-646879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.247 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:27:19.647580 1804300 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 15:27:19.648657 1804300 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:27:19.648758 1804300 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-1755824/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 15:27:19.668378 1804300 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 15:27:19.668414 1804300 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-1755824/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 15:27:19.684672 1804300 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 15:27:19.685244 1804300 cni.go:84] Creating CNI manager for ""
	I1018 15:27:19.685317 1804300 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 15:27:19.685399 1804300 start.go:349] cluster config:
	{Name:stopped-upgrade-646879 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-646879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.247 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:27:19.685524 1804300 iso.go:125] acquiring lock: {Name:mk7faf1d3c636cdbb2becc20102b665984151b51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:27:19.687333 1804300 out.go:179] * Starting "stopped-upgrade-646879" primary control-plane node in "stopped-upgrade-646879" cluster
	I1018 15:27:17.280323 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:17.281221 1804089 main.go:141] libmachine: (auto-320866) DBG | no network interface addresses found for domain auto-320866 (source=lease)
	I1018 15:27:17.281267 1804089 main.go:141] libmachine: (auto-320866) DBG | trying to list again with source=arp
	I1018 15:27:17.281611 1804089 main.go:141] libmachine: (auto-320866) DBG | unable to find current IP address of domain auto-320866 in network mk-auto-320866 (interfaces detected: [])
	I1018 15:27:17.281638 1804089 main.go:141] libmachine: (auto-320866) DBG | I1018 15:27:17.281592 1804118 retry.go:31] will retry after 914.464617ms: waiting for domain to come up
	I1018 15:27:18.197975 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:18.198641 1804089 main.go:141] libmachine: (auto-320866) DBG | no network interface addresses found for domain auto-320866 (source=lease)
	I1018 15:27:18.198670 1804089 main.go:141] libmachine: (auto-320866) DBG | trying to list again with source=arp
	I1018 15:27:18.199118 1804089 main.go:141] libmachine: (auto-320866) DBG | unable to find current IP address of domain auto-320866 in network mk-auto-320866 (interfaces detected: [])
	I1018 15:27:18.199165 1804089 main.go:141] libmachine: (auto-320866) DBG | I1018 15:27:18.199066 1804118 retry.go:31] will retry after 1.001827107s: waiting for domain to come up
	I1018 15:27:19.202905 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:19.203521 1804089 main.go:141] libmachine: (auto-320866) DBG | no network interface addresses found for domain auto-320866 (source=lease)
	I1018 15:27:19.203550 1804089 main.go:141] libmachine: (auto-320866) DBG | trying to list again with source=arp
	I1018 15:27:19.203867 1804089 main.go:141] libmachine: (auto-320866) DBG | unable to find current IP address of domain auto-320866 in network mk-auto-320866 (interfaces detected: [])
	I1018 15:27:19.203920 1804089 main.go:141] libmachine: (auto-320866) DBG | I1018 15:27:19.203849 1804118 retry.go:31] will retry after 1.834659839s: waiting for domain to come up
	I1018 15:27:21.041079 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:21.041933 1804089 main.go:141] libmachine: (auto-320866) DBG | no network interface addresses found for domain auto-320866 (source=lease)
	I1018 15:27:21.041958 1804089 main.go:141] libmachine: (auto-320866) DBG | trying to list again with source=arp
	I1018 15:27:21.042315 1804089 main.go:141] libmachine: (auto-320866) DBG | unable to find current IP address of domain auto-320866 in network mk-auto-320866 (interfaces detected: [])
	I1018 15:27:21.042352 1804089 main.go:141] libmachine: (auto-320866) DBG | I1018 15:27:21.042300 1804118 retry.go:31] will retry after 1.711084821s: waiting for domain to come up
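
The repeated "will retry after ..." lines above come from a retry helper (retry.go:31) that polls libvirt for the new domain's DHCP lease, sleeping a growing, jittered delay between attempts. A minimal sketch of that pattern, assuming nothing about minikube's actual implementation beyond what the log shows (the function names here are invented):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var errNoLease = errors.New("no DHCP lease yet")

	// retryWithBackoff keeps calling fn until it succeeds or attempts run
	// out, sleeping a jittered, growing delay between tries -- the same
	// shape as the "will retry after 914.464617ms" lines above.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		delay := base
		for i := 0; i < attempts; i++ {
			if err := fn(); err == nil {
				return nil
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay)/2))
			fmt.Printf("will retry after %v: waiting for domain to come up\n", jittered)
			time.Sleep(jittered)
			delay *= 2
		}
		return fmt.Errorf("domain did not come up after %d attempts", attempts)
	}

	func main() {
		tries := 0
		_ = retryWithBackoff(5, time.Second, func() error {
			tries++
			if tries < 3 {
				return errNoLease // pretend the lease is not visible yet
			}
			return nil
		})
	}

The lease check itself falls back from the DHCP lease table (source=lease) to the ARP cache (source=arp), as the paired DBG lines show.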
	I1018 15:27:19.546327 1803906 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 15:27:19.638468 1803906 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1018 15:27:19.730744 1803906 api_server.go:52] waiting for apiserver process to appear ...
	I1018 15:27:19.730858 1803906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 15:27:20.231671 1803906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 15:27:20.731033 1803906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 15:27:20.763587 1803906 api_server.go:72] duration metric: took 1.032858692s to wait for apiserver process to appear ...
	I1018 15:27:20.763622 1803906 api_server.go:88] waiting for apiserver healthz status ...
	I1018 15:27:20.763648 1803906 api_server.go:253] Checking apiserver healthz at https://192.168.72.16:8443/healthz ...
	I1018 15:27:23.537090 1803906 api_server.go:279] https://192.168.72.16:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 15:27:23.537133 1803906 api_server.go:103] status: https://192.168.72.16:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 15:27:23.537155 1803906 api_server.go:253] Checking apiserver healthz at https://192.168.72.16:8443/healthz ...
	I1018 15:27:23.587002 1803906 api_server.go:279] https://192.168.72.16:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 15:27:23.587038 1803906 api_server.go:103] status: https://192.168.72.16:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 15:27:23.764445 1803906 api_server.go:253] Checking apiserver healthz at https://192.168.72.16:8443/healthz ...
	I1018 15:27:23.781482 1803906 api_server.go:279] https://192.168.72.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 15:27:23.781527 1803906 api_server.go:103] status: https://192.168.72.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 15:27:19.689591 1804300 preload.go:183] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1018 15:27:19.689654 1804300 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1018 15:27:19.689667 1804300 cache.go:58] Caching tarball of preloaded images
	I1018 15:27:19.689824 1804300 preload.go:233] Found /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 15:27:19.689850 1804300 cache.go:61] Finished verifying existence of preloaded tar for v1.28.3 on crio
	I1018 15:27:19.690022 1804300 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/stopped-upgrade-646879/config.json ...
	I1018 15:27:19.690330 1804300 start.go:360] acquireMachinesLock for stopped-upgrade-646879: {Name:mkd96faf82baee5d117338197f9c6cbf4f45de94 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
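
The preload lines just above show the caching decision for the v1.28.3 image tarball: if the versioned archive already exists under the .minikube cache, the download is skipped entirely. A hedged Go sketch of that existence check, with the path layout copied from the log (the helper name is invented, and the "v18" preload schema tag is taken verbatim from the file name above):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// preloadPath rebuilds the cache location seen in the log for a given
	// Kubernetes version and container runtime.
	func preloadPath(minikubeHome, k8sVersion, runtime string) string {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4",
			k8sVersion, runtime)
		return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
	}

	func main() {
		p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.28.3", "cri-o")
		if _, err := os.Stat(p); err == nil {
			fmt.Println("found local preload, skipping download:", p)
		} else {
			fmt.Println("no local preload, would download:", p)
		}
	}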
	I1018 15:27:24.264415 1803906 api_server.go:253] Checking apiserver healthz at https://192.168.72.16:8443/healthz ...
	I1018 15:27:24.273237 1803906 api_server.go:279] https://192.168.72.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 15:27:24.273279 1803906 api_server.go:103] status: https://192.168.72.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 15:27:24.764543 1803906 api_server.go:253] Checking apiserver healthz at https://192.168.72.16:8443/healthz ...
	I1018 15:27:24.771820 1803906 api_server.go:279] https://192.168.72.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 15:27:24.771861 1803906 api_server.go:103] status: https://192.168.72.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 15:27:25.264577 1803906 api_server.go:253] Checking apiserver healthz at https://192.168.72.16:8443/healthz ...
	I1018 15:27:25.271163 1803906 api_server.go:279] https://192.168.72.16:8443/healthz returned 200:
	ok
	I1018 15:27:25.279606 1803906 api_server.go:141] control plane version: v1.34.1
	I1018 15:27:25.279637 1803906 api_server.go:131] duration metric: took 4.51600683s to wait for apiserver health ...
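
The healthz sequence above is the normal progression of an apiserver coming up: 403 while the server still treats the unauthenticated probe as system:anonymous (RBAC not yet bootstrapped), 500 while post-start hooks such as rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes are still failing, and finally 200 "ok". The prober simply polls /healthz until it reads 200. A minimal sketch of such a loop, assuming nothing about minikube's internals; InsecureSkipVerify is for illustration only, a real prober should verify the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns
	// HTTP 200. 403 and 500 are treated as "not ready yet", matching the
	// log above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Illustration only: skips CA verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy within %v", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.16:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}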
	I1018 15:27:25.279647 1803906 cni.go:84] Creating CNI manager for ""
	I1018 15:27:25.279654 1803906 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 15:27:25.281654 1803906 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1018 15:27:25.283203 1803906 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1018 15:27:25.300424 1803906 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1018 15:27:25.337641 1803906 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 15:27:25.344935 1803906 system_pods.go:59] 6 kube-system pods found
	I1018 15:27:25.344985 1803906 system_pods.go:61] "coredns-66bc5c9577-2ztp2" [e28f3cfe-ccea-418b-9644-100bb187e0ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:27:25.344996 1803906 system_pods.go:61] "etcd-pause-153767" [e1e2000c-d638-4a7e-9a10-c3120680ad8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 15:27:25.345010 1803906 system_pods.go:61] "kube-apiserver-pause-153767" [d065308c-ddc3-4717-8cb7-63ee0628dab0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 15:27:25.345020 1803906 system_pods.go:61] "kube-controller-manager-pause-153767" [b6861daf-7ee2-4568-9b70-20a7a8574fc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 15:27:25.345026 1803906 system_pods.go:61] "kube-proxy-nk7dv" [95bf3faf-25ed-4469-9495-c37a4b55623b] Running
	I1018 15:27:25.345034 1803906 system_pods.go:61] "kube-scheduler-pause-153767" [42ad4c8d-f8b0-448f-8f56-f34638904eb5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 15:27:25.345042 1803906 system_pods.go:74] duration metric: took 7.36436ms to wait for pod list to return data ...
	I1018 15:27:25.345052 1803906 node_conditions.go:102] verifying NodePressure condition ...
	I1018 15:27:25.351733 1803906 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1018 15:27:25.351777 1803906 node_conditions.go:123] node cpu capacity is 2
	I1018 15:27:25.351798 1803906 node_conditions.go:105] duration metric: took 6.739141ms to run NodePressure ...
	I1018 15:27:25.351872 1803906 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 15:27:25.851684 1803906 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1018 15:27:25.855882 1803906 kubeadm.go:743] kubelet initialised
	I1018 15:27:25.855910 1803906 kubeadm.go:744] duration metric: took 4.196459ms waiting for restarted kubelet to initialise ...
	I1018 15:27:25.855932 1803906 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 15:27:25.876350 1803906 ops.go:34] apiserver oom_adj: -16
	I1018 15:27:25.876383 1803906 kubeadm.go:601] duration metric: took 9.048616961s to restartPrimaryControlPlane
	I1018 15:27:25.876399 1803906 kubeadm.go:402] duration metric: took 9.215824328s to StartCluster
	I1018 15:27:25.876426 1803906 settings.go:142] acquiring lock: {Name:mkc4a015ef1628793f35d59d734503738678fa0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:27:25.876549 1803906 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-1755824/kubeconfig
	I1018 15:27:25.877493 1803906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/kubeconfig: {Name:mkd0359d239071160661347e1005ef052a3265ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:27:25.877779 1803906 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.16 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 15:27:25.877892 1803906 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 15:27:25.878047 1803906 config.go:182] Loaded profile config "pause-153767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:27:25.879466 1803906 out.go:179] * Verifying Kubernetes components...
	I1018 15:27:25.880271 1803906 out.go:179] * Enabled addons: 
	I1018 15:27:22.755016 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:22.755877 1804089 main.go:141] libmachine: (auto-320866) DBG | no network interface addresses found for domain auto-320866 (source=lease)
	I1018 15:27:22.755909 1804089 main.go:141] libmachine: (auto-320866) DBG | trying to list again with source=arp
	I1018 15:27:22.756221 1804089 main.go:141] libmachine: (auto-320866) DBG | unable to find current IP address of domain auto-320866 in network mk-auto-320866 (interfaces detected: [])
	I1018 15:27:22.756287 1804089 main.go:141] libmachine: (auto-320866) DBG | I1018 15:27:22.756222 1804118 retry.go:31] will retry after 1.995548146s: waiting for domain to come up
	I1018 15:27:24.753971 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:24.754798 1804089 main.go:141] libmachine: (auto-320866) DBG | no network interface addresses found for domain auto-320866 (source=lease)
	I1018 15:27:24.754840 1804089 main.go:141] libmachine: (auto-320866) DBG | trying to list again with source=arp
	I1018 15:27:24.755298 1804089 main.go:141] libmachine: (auto-320866) DBG | unable to find current IP address of domain auto-320866 in network mk-auto-320866 (interfaces detected: [])
	I1018 15:27:24.755326 1804089 main.go:141] libmachine: (auto-320866) DBG | I1018 15:27:24.755255 1804118 retry.go:31] will retry after 2.879345962s: waiting for domain to come up
	I1018 15:27:25.881132 1803906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:27:25.881721 1803906 addons.go:514] duration metric: took 3.843381ms for enable addons: enabled=[]
	I1018 15:27:26.100881 1803906 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 15:27:26.129496 1803906 node_ready.go:35] waiting up to 6m0s for node "pause-153767" to be "Ready" ...
	I1018 15:27:26.132758 1803906 node_ready.go:49] node "pause-153767" is "Ready"
	I1018 15:27:26.132803 1803906 node_ready.go:38] duration metric: took 3.249227ms for node "pause-153767" to be "Ready" ...
	I1018 15:27:26.132822 1803906 api_server.go:52] waiting for apiserver process to appear ...
	I1018 15:27:26.132876 1803906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 15:27:26.154485 1803906 api_server.go:72] duration metric: took 276.662725ms to wait for apiserver process to appear ...
	I1018 15:27:26.154522 1803906 api_server.go:88] waiting for apiserver healthz status ...
	I1018 15:27:26.154548 1803906 api_server.go:253] Checking apiserver healthz at https://192.168.72.16:8443/healthz ...
	I1018 15:27:26.161194 1803906 api_server.go:279] https://192.168.72.16:8443/healthz returned 200:
	ok
	I1018 15:27:26.162338 1803906 api_server.go:141] control plane version: v1.34.1
	I1018 15:27:26.162380 1803906 api_server.go:131] duration metric: took 7.848496ms to wait for apiserver health ...
	I1018 15:27:26.162391 1803906 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 15:27:26.172265 1803906 system_pods.go:59] 6 kube-system pods found
	I1018 15:27:26.172308 1803906 system_pods.go:61] "coredns-66bc5c9577-2ztp2" [e28f3cfe-ccea-418b-9644-100bb187e0ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:27:26.172319 1803906 system_pods.go:61] "etcd-pause-153767" [e1e2000c-d638-4a7e-9a10-c3120680ad8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 15:27:26.172329 1803906 system_pods.go:61] "kube-apiserver-pause-153767" [d065308c-ddc3-4717-8cb7-63ee0628dab0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 15:27:26.172338 1803906 system_pods.go:61] "kube-controller-manager-pause-153767" [b6861daf-7ee2-4568-9b70-20a7a8574fc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 15:27:26.172363 1803906 system_pods.go:61] "kube-proxy-nk7dv" [95bf3faf-25ed-4469-9495-c37a4b55623b] Running
	I1018 15:27:26.172372 1803906 system_pods.go:61] "kube-scheduler-pause-153767" [42ad4c8d-f8b0-448f-8f56-f34638904eb5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 15:27:26.172385 1803906 system_pods.go:74] duration metric: took 9.985111ms to wait for pod list to return data ...
	I1018 15:27:26.172399 1803906 default_sa.go:34] waiting for default service account to be created ...
	I1018 15:27:26.181955 1803906 default_sa.go:45] found service account: "default"
	I1018 15:27:26.181984 1803906 default_sa.go:55] duration metric: took 9.575988ms for default service account to be created ...
	I1018 15:27:26.181993 1803906 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 15:27:26.187464 1803906 system_pods.go:86] 6 kube-system pods found
	I1018 15:27:26.187498 1803906 system_pods.go:89] "coredns-66bc5c9577-2ztp2" [e28f3cfe-ccea-418b-9644-100bb187e0ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:27:26.187506 1803906 system_pods.go:89] "etcd-pause-153767" [e1e2000c-d638-4a7e-9a10-c3120680ad8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 15:27:26.187513 1803906 system_pods.go:89] "kube-apiserver-pause-153767" [d065308c-ddc3-4717-8cb7-63ee0628dab0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 15:27:26.187521 1803906 system_pods.go:89] "kube-controller-manager-pause-153767" [b6861daf-7ee2-4568-9b70-20a7a8574fc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 15:27:26.187529 1803906 system_pods.go:89] "kube-proxy-nk7dv" [95bf3faf-25ed-4469-9495-c37a4b55623b] Running
	I1018 15:27:26.187539 1803906 system_pods.go:89] "kube-scheduler-pause-153767" [42ad4c8d-f8b0-448f-8f56-f34638904eb5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 15:27:26.187551 1803906 system_pods.go:126] duration metric: took 5.551512ms to wait for k8s-apps to be running ...
	I1018 15:27:26.187568 1803906 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 15:27:26.187642 1803906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:27:26.210844 1803906 system_svc.go:56] duration metric: took 23.26473ms WaitForService to wait for kubelet
	I1018 15:27:26.210880 1803906 kubeadm.go:586] duration metric: took 333.065807ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 15:27:26.210900 1803906 node_conditions.go:102] verifying NodePressure condition ...
	I1018 15:27:26.215816 1803906 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1018 15:27:26.215849 1803906 node_conditions.go:123] node cpu capacity is 2
	I1018 15:27:26.215872 1803906 node_conditions.go:105] duration metric: took 4.958387ms to run NodePressure ...
	I1018 15:27:26.215888 1803906 start.go:241] waiting for startup goroutines ...
	I1018 15:27:26.215898 1803906 start.go:246] waiting for cluster config update ...
	I1018 15:27:26.215913 1803906 start.go:255] writing updated cluster config ...
	I1018 15:27:26.216232 1803906 ssh_runner.go:195] Run: rm -f paused
	I1018 15:27:26.225625 1803906 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 15:27:26.226318 1803906 kapi.go:59] client config for pause-153767: &rest.Config{Host:"https://192.168.72.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/pause-153767/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/pause-153767/client.key", CAFile:"/home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
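placeholder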
	I1018 15:27:26.230664 1803906 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2ztp2" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 15:27:28.237299 1803906 pod_ready.go:104] pod "coredns-66bc5c9577-2ztp2" is not "Ready", error: <nil>
	I1018 15:27:28.743114 1803906 pod_ready.go:94] pod "coredns-66bc5c9577-2ztp2" is "Ready"
	I1018 15:27:28.743147 1803906 pod_ready.go:86] duration metric: took 2.512446816s for pod "coredns-66bc5c9577-2ztp2" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:27:28.747723 1803906 pod_ready.go:83] waiting for pod "etcd-pause-153767" in "kube-system" namespace to be "Ready" or be gone ...
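
The pod_ready.go lines above wait for each control-plane pod's Ready condition to flip to True (coredns took 2.5s; etcd is next). The check behind the 'pod ... is not "Ready"' messages is the standard PodReady condition. A minimal client-go sketch of the same wait, assuming only public client-go APIs; the kubeconfig path here is hypothetical, the pod name is taken from the log:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-66bc5c9577-2ztp2", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}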
	I1018 15:27:27.636808 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:27.637444 1804089 main.go:141] libmachine: (auto-320866) DBG | no network interface addresses found for domain auto-320866 (source=lease)
	I1018 15:27:27.637475 1804089 main.go:141] libmachine: (auto-320866) DBG | trying to list again with source=arp
	I1018 15:27:27.637791 1804089 main.go:141] libmachine: (auto-320866) DBG | unable to find current IP address of domain auto-320866 in network mk-auto-320866 (interfaces detected: [])
	I1018 15:27:27.637843 1804089 main.go:141] libmachine: (auto-320866) DBG | I1018 15:27:27.637784 1804118 retry.go:31] will retry after 3.111244006s: waiting for domain to come up
	I1018 15:27:30.752642 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:30.753415 1804089 main.go:141] libmachine: (auto-320866) found domain IP: 192.168.39.149
	I1018 15:27:30.753449 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has current primary IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:30.753458 1804089 main.go:141] libmachine: (auto-320866) reserving static IP address...
	I1018 15:27:30.753949 1804089 main.go:141] libmachine: (auto-320866) DBG | unable to find host DHCP lease matching {name: "auto-320866", mac: "52:54:00:f3:b9:cb", ip: "192.168.39.149"} in network mk-auto-320866
	I1018 15:27:30.966376 1804089 main.go:141] libmachine: (auto-320866) DBG | Getting to WaitForSSH function...
	I1018 15:27:30.966411 1804089 main.go:141] libmachine: (auto-320866) reserved static IP address 192.168.39.149 for domain auto-320866
	I1018 15:27:30.966424 1804089 main.go:141] libmachine: (auto-320866) waiting for SSH...
	I1018 15:27:30.969972 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:30.970532 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:30.970582 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:30.970746 1804089 main.go:141] libmachine: (auto-320866) DBG | Using SSH client type: external
	I1018 15:27:30.970775 1804089 main.go:141] libmachine: (auto-320866) DBG | Using SSH private key: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/auto-320866/id_rsa (-rw-------)
	I1018 15:27:30.970847 1804089 main.go:141] libmachine: (auto-320866) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.149 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/auto-320866/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1018 15:27:30.970874 1804089 main.go:141] libmachine: (auto-320866) DBG | About to run SSH command:
	I1018 15:27:30.970899 1804089 main.go:141] libmachine: (auto-320866) DBG | exit 0
	I1018 15:27:31.106387 1804089 main.go:141] libmachine: (auto-320866) DBG | SSH cmd err, output: <nil>: 
	I1018 15:27:31.106790 1804089 main.go:141] libmachine: (auto-320866) domain creation complete
	I1018 15:27:31.107264 1804089 main.go:141] libmachine: (auto-320866) Calling .GetConfigRaw
	I1018 15:27:31.108119 1804089 main.go:141] libmachine: (auto-320866) Calling .DriverName
	I1018 15:27:31.108375 1804089 main.go:141] libmachine: (auto-320866) Calling .DriverName
	I1018 15:27:31.108566 1804089 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1018 15:27:31.108579 1804089 main.go:141] libmachine: (auto-320866) Calling .GetState
	I1018 15:27:31.110102 1804089 main.go:141] libmachine: Detecting operating system of created instance...
	I1018 15:27:31.110115 1804089 main.go:141] libmachine: Waiting for SSH to be available...
	I1018 15:27:31.110120 1804089 main.go:141] libmachine: Getting to WaitForSSH function...
	I1018 15:27:31.110125 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHHostname
	I1018 15:27:31.113024 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:31.113411 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:auto-320866 Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:31.113450 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:31.113649 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHPort
	I1018 15:27:31.113837 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHKeyPath
	I1018 15:27:31.113984 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHKeyPath
	I1018 15:27:31.114133 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHUsername
	I1018 15:27:31.114287 1804089 main.go:141] libmachine: Using SSH client type: native
	I1018 15:27:31.114619 1804089 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1018 15:27:31.114633 1804089 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1018 15:27:31.228980 1804089 main.go:141] libmachine: SSH cmd err, output: <nil>: 
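
Both SSH probes above (the external ssh invocation and the native client) simply run `exit 0` on the guest and treat a clean exit status as "sshd is up and the key is accepted". A sketch of the same probe using the external ssh binary with the hardened options visible in the log; the key path is hypothetical:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReady runs `exit 0` on the guest, mirroring the WaitForSSH probe
	// in the log: a zero exit status means SSH is available.
	func sshReady(host, keyPath string) bool {
		cmd := exec.Command("ssh",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-i", keyPath,
			"docker@"+host, "exit 0")
		return cmd.Run() == nil
	}

	func main() {
		for !sshReady("192.168.39.149", "/path/to/id_rsa") { // key path hypothetical
			fmt.Println("waiting for SSH...")
			time.Sleep(2 * time.Second)
		}
		fmt.Println("SSH is available")
	}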
	I1018 15:27:31.229009 1804089 main.go:141] libmachine: Detecting the provisioner...
	I1018 15:27:31.229016 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHHostname
	I1018 15:27:31.232831 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:31.233245 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:auto-320866 Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:31.233268 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:31.233523 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHPort
	I1018 15:27:31.233804 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHKeyPath
	I1018 15:27:31.234031 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHKeyPath
	I1018 15:27:31.234183 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHUsername
	I1018 15:27:31.234404 1804089 main.go:141] libmachine: Using SSH client type: native
	I1018 15:27:31.234640 1804089 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1018 15:27:31.234683 1804089 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1018 15:27:31.349841 1804089 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1018 15:27:31.349915 1804089 main.go:141] libmachine: found compatible host: buildroot
	I1018 15:27:31.349922 1804089 main.go:141] libmachine: Provisioning with buildroot...
	I1018 15:27:31.349930 1804089 main.go:141] libmachine: (auto-320866) Calling .GetMachineName
	I1018 15:27:31.350219 1804089 buildroot.go:166] provisioning hostname "auto-320866"
	I1018 15:27:31.350260 1804089 main.go:141] libmachine: (auto-320866) Calling .GetMachineName
	I1018 15:27:31.350522 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHHostname
	I1018 15:27:31.353648 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:31.354107 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:auto-320866 Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:31.354130 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:31.354422 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHPort
	I1018 15:27:31.354633 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHKeyPath
	I1018 15:27:31.354796 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHKeyPath
	I1018 15:27:31.354978 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHUsername
	I1018 15:27:31.355179 1804089 main.go:141] libmachine: Using SSH client type: native
	I1018 15:27:31.355467 1804089 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1018 15:27:31.355482 1804089 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-320866 && echo "auto-320866" | sudo tee /etc/hostname
	I1018 15:27:31.487801 1804089 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-320866
	
	I1018 15:27:31.487834 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHHostname
	I1018 15:27:31.491465 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:31.491885 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:auto-320866 Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:31.491908 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:31.492181 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHPort
	I1018 15:27:31.492414 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHKeyPath
	I1018 15:27:31.492594 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHKeyPath
	I1018 15:27:31.492750 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHUsername
	I1018 15:27:31.492930 1804089 main.go:141] libmachine: Using SSH client type: native
	I1018 15:27:31.493131 1804089 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1018 15:27:31.493146 1804089 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-320866' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-320866/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-320866' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 15:27:31.637675 1804089 main.go:141] libmachine: SSH cmd err, output: <nil>: 
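
The hostname fixup that just completed is an idempotent shell snippet rendered per machine: it only touches /etc/hosts when no entry ends with the hostname, preferring to rewrite an existing 127.0.1.1 line over appending a new one. A minimal Go sketch of rendering that snippet (the function name and templating are illustrative, not minikube's actual code):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // buildHostsFixup renders the idempotent /etc/hosts snippet seen in the
    // log above: if no entry ends with the hostname, rewrite an existing
    // 127.0.1.1 line or append one. Purely illustrative; minikube templates
    // the equivalent string in its provisioner.
    func buildHostsFixup(hostname string) string {
    	tpl := `if ! grep -xq '.*\s%[1]s' /etc/hosts; then
    	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
    	else
    		echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
    	fi
    fi`
    	return fmt.Sprintf(tpl, hostname)
    }

    func main() {
    	fmt.Println(strings.TrimSpace(buildHostsFixup("auto-320866")))
    }
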
	I1018 15:27:31.637712 1804089 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21409-1755824/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-1755824/.minikube}
	I1018 15:27:31.637781 1804089 buildroot.go:174] setting up certificates
	I1018 15:27:31.637794 1804089 provision.go:84] configureAuth start
	I1018 15:27:31.637811 1804089 main.go:141] libmachine: (auto-320866) Calling .GetMachineName
	I1018 15:27:31.638187 1804089 main.go:141] libmachine: (auto-320866) Calling .GetIP
	I1018 15:27:31.641456 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:31.641856 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:auto-320866 Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:31.641883 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:31.642072 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHHostname
	I1018 15:27:31.644886 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:31.645303 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:auto-320866 Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:31.645332 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:31.645496 1804089 provision.go:143] copyHostCerts
	I1018 15:27:31.645562 1804089 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.pem, removing ...
	I1018 15:27:31.645587 1804089 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.pem
	I1018 15:27:31.645703 1804089 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.pem (1082 bytes)
	I1018 15:27:31.645856 1804089 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1755824/.minikube/cert.pem, removing ...
	I1018 15:27:31.645868 1804089 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1755824/.minikube/cert.pem
	I1018 15:27:31.645914 1804089 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-1755824/.minikube/cert.pem (1123 bytes)
	I1018 15:27:31.646018 1804089 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1755824/.minikube/key.pem, removing ...
	I1018 15:27:31.646029 1804089 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1755824/.minikube/key.pem
	I1018 15:27:31.646067 1804089 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-1755824/.minikube/key.pem (1675 bytes)
	I1018 15:27:31.646148 1804089 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca-key.pem org=jenkins.auto-320866 san=[127.0.0.1 192.168.39.149 auto-320866 localhost minikube]
	I1018 15:27:32.880890 1804300 start.go:364] duration metric: took 13.190435497s to acquireMachinesLock for "stopped-upgrade-646879"
	I1018 15:27:32.880940 1804300 start.go:96] Skipping create...Using existing machine configuration
	I1018 15:27:32.880949 1804300 fix.go:54] fixHost starting: 
	I1018 15:27:32.881400 1804300 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 15:27:32.881459 1804300 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 15:27:32.899795 1804300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36381
	I1018 15:27:32.900486 1804300 main.go:141] libmachine: () Calling .GetVersion
	I1018 15:27:32.901136 1804300 main.go:141] libmachine: Using API Version  1
	I1018 15:27:32.901164 1804300 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 15:27:32.901687 1804300 main.go:141] libmachine: () Calling .GetMachineName
	I1018 15:27:32.901960 1804300 main.go:141] libmachine: (stopped-upgrade-646879) Calling .DriverName
	I1018 15:27:32.902156 1804300 main.go:141] libmachine: (stopped-upgrade-646879) Calling .GetState
	I1018 15:27:32.904421 1804300 fix.go:112] recreateIfNeeded on stopped-upgrade-646879: state=Stopped err=<nil>
	I1018 15:27:32.904471 1804300 main.go:141] libmachine: (stopped-upgrade-646879) Calling .DriverName
	W1018 15:27:32.904659 1804300 fix.go:138] unexpected machine state, will restart: <nil>
	W1018 15:27:30.756176 1803906 pod_ready.go:104] pod "etcd-pause-153767" is not "Ready", error: <nil>
	W1018 15:27:33.254387 1803906 pod_ready.go:104] pod "etcd-pause-153767" is not "Ready", error: <nil>
	I1018 15:27:32.907482 1804300 out.go:252] * Restarting existing kvm2 VM for "stopped-upgrade-646879" ...
	I1018 15:27:32.907524 1804300 main.go:141] libmachine: (stopped-upgrade-646879) Calling .Start
	I1018 15:27:32.907738 1804300 main.go:141] libmachine: (stopped-upgrade-646879) starting domain...
	I1018 15:27:32.907760 1804300 main.go:141] libmachine: (stopped-upgrade-646879) ensuring networks are active...
	I1018 15:27:32.908660 1804300 main.go:141] libmachine: (stopped-upgrade-646879) Ensuring network default is active
	I1018 15:27:32.909215 1804300 main.go:141] libmachine: (stopped-upgrade-646879) Ensuring network mk-stopped-upgrade-646879 is active
	I1018 15:27:32.909825 1804300 main.go:141] libmachine: (stopped-upgrade-646879) getting domain XML...
	I1018 15:27:32.911026 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | starting domain XML:
	I1018 15:27:32.911051 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | <domain type='kvm'>
	I1018 15:27:32.911063 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |   <name>stopped-upgrade-646879</name>
	I1018 15:27:32.911077 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |   <uuid>47eaba17-7efa-45cc-afb5-18fbd25cf505</uuid>
	I1018 15:27:32.911087 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |   <memory unit='KiB'>3145728</memory>
	I1018 15:27:32.911099 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1018 15:27:32.911108 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |   <vcpu placement='static'>2</vcpu>
	I1018 15:27:32.911118 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |   <os>
	I1018 15:27:32.911143 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1018 15:27:32.911158 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <boot dev='cdrom'/>
	I1018 15:27:32.911170 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <boot dev='hd'/>
	I1018 15:27:32.911177 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <bootmenu enable='no'/>
	I1018 15:27:32.911189 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |   </os>
	I1018 15:27:32.911199 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |   <features>
	I1018 15:27:32.911216 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <acpi/>
	I1018 15:27:32.911228 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <apic/>
	I1018 15:27:32.911257 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <pae/>
	I1018 15:27:32.911281 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |   </features>
	I1018 15:27:32.911301 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1018 15:27:32.911312 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |   <clock offset='utc'/>
	I1018 15:27:32.911323 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |   <on_poweroff>destroy</on_poweroff>
	I1018 15:27:32.911352 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |   <on_reboot>restart</on_reboot>
	I1018 15:27:32.911366 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |   <on_crash>destroy</on_crash>
	I1018 15:27:32.911373 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |   <devices>
	I1018 15:27:32.911382 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1018 15:27:32.911389 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <disk type='file' device='cdrom'>
	I1018 15:27:32.911404 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <driver name='qemu' type='raw'/>
	I1018 15:27:32.911419 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <source file='/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/stopped-upgrade-646879/boot2docker.iso'/>
	I1018 15:27:32.911433 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <target dev='hdc' bus='scsi'/>
	I1018 15:27:32.911441 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <readonly/>
	I1018 15:27:32.911452 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1018 15:27:32.911462 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     </disk>
	I1018 15:27:32.911471 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <disk type='file' device='disk'>
	I1018 15:27:32.911479 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1018 15:27:32.911491 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <source file='/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/stopped-upgrade-646879/stopped-upgrade-646879.rawdisk'/>
	I1018 15:27:32.911499 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <target dev='hda' bus='virtio'/>
	I1018 15:27:32.911509 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1018 15:27:32.911530 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     </disk>
	I1018 15:27:32.911544 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1018 15:27:32.911557 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1018 15:27:32.911565 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     </controller>
	I1018 15:27:32.911578 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1018 15:27:32.911597 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1018 15:27:32.911606 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1018 15:27:32.911614 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     </controller>
	I1018 15:27:32.911621 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <interface type='network'>
	I1018 15:27:32.911629 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <mac address='52:54:00:c3:85:7d'/>
	I1018 15:27:32.911637 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <source network='mk-stopped-upgrade-646879'/>
	I1018 15:27:32.911676 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <model type='virtio'/>
	I1018 15:27:32.911713 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1018 15:27:32.911729 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     </interface>
	I1018 15:27:32.911737 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <interface type='network'>
	I1018 15:27:32.911767 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <mac address='52:54:00:87:ba:01'/>
	I1018 15:27:32.911780 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <source network='default'/>
	I1018 15:27:32.911803 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <model type='virtio'/>
	I1018 15:27:32.911827 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1018 15:27:32.911840 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     </interface>
	I1018 15:27:32.911850 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <serial type='pty'>
	I1018 15:27:32.911860 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <target type='isa-serial' port='0'>
	I1018 15:27:32.911870 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |         <model name='isa-serial'/>
	I1018 15:27:32.911879 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       </target>
	I1018 15:27:32.911887 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     </serial>
	I1018 15:27:32.911896 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <console type='pty'>
	I1018 15:27:32.911919 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <target type='serial' port='0'/>
	I1018 15:27:32.911927 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     </console>
	I1018 15:27:32.911935 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <input type='mouse' bus='ps2'/>
	I1018 15:27:32.911944 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <input type='keyboard' bus='ps2'/>
	I1018 15:27:32.911951 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <audio id='1' type='none'/>
	I1018 15:27:32.911960 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <memballoon model='virtio'>
	I1018 15:27:32.911970 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1018 15:27:32.911978 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     </memballoon>
	I1018 15:27:32.911990 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <rng model='virtio'>
	I1018 15:27:32.911999 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <backend model='random'>/dev/random</backend>
	I1018 15:27:32.912009 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1018 15:27:32.912031 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     </rng>
	I1018 15:27:32.912042 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |   </devices>
	I1018 15:27:32.912076 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | </domain>
	I1018 15:27:32.912095 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | 
	I1018 15:27:34.446271 1804300 main.go:141] libmachine: (stopped-upgrade-646879) waiting for domain to start...
	I1018 15:27:34.447699 1804300 main.go:141] libmachine: (stopped-upgrade-646879) domain is now running
	I1018 15:27:34.447742 1804300 main.go:141] libmachine: (stopped-upgrade-646879) waiting for IP...
	I1018 15:27:34.448706 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | domain stopped-upgrade-646879 has defined MAC address 52:54:00:c3:85:7d in network mk-stopped-upgrade-646879
	I1018 15:27:34.449295 1804300 main.go:141] libmachine: (stopped-upgrade-646879) found domain IP: 192.168.50.247
	I1018 15:27:34.449319 1804300 main.go:141] libmachine: (stopped-upgrade-646879) reserving static IP address...
	I1018 15:27:34.449372 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | domain stopped-upgrade-646879 has current primary IP address 192.168.50.247 and MAC address 52:54:00:c3:85:7d in network mk-stopped-upgrade-646879
	I1018 15:27:34.449962 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | found host DHCP lease matching {name: "stopped-upgrade-646879", mac: "52:54:00:c3:85:7d", ip: "192.168.50.247"} in network mk-stopped-upgrade-646879: {Iface:virbr3 ExpiryTime:2025-10-18 16:26:48 +0000 UTC Type:0 Mac:52:54:00:c3:85:7d Iaid: IPaddr:192.168.50.247 Prefix:24 Hostname:stopped-upgrade-646879 Clientid:01:52:54:00:c3:85:7d}
	I1018 15:27:34.449997 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | skip adding static IP to network mk-stopped-upgrade-646879 - found existing host DHCP lease matching {name: "stopped-upgrade-646879", mac: "52:54:00:c3:85:7d", ip: "192.168.50.247"}
	I1018 15:27:34.450027 1804300 main.go:141] libmachine: (stopped-upgrade-646879) reserved static IP address 192.168.50.247 for domain stopped-upgrade-646879
	I1018 15:27:34.450044 1804300 main.go:141] libmachine: (stopped-upgrade-646879) waiting for SSH...
	I1018 15:27:34.450056 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | Getting to WaitForSSH function...
	I1018 15:27:34.453369 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | domain stopped-upgrade-646879 has defined MAC address 52:54:00:c3:85:7d in network mk-stopped-upgrade-646879
	I1018 15:27:34.453879 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:85:7d", ip: ""} in network mk-stopped-upgrade-646879: {Iface:virbr3 ExpiryTime:2025-10-18 16:26:48 +0000 UTC Type:0 Mac:52:54:00:c3:85:7d Iaid: IPaddr:192.168.50.247 Prefix:24 Hostname:stopped-upgrade-646879 Clientid:01:52:54:00:c3:85:7d}
	I1018 15:27:34.453919 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | domain stopped-upgrade-646879 has defined IP address 192.168.50.247 and MAC address 52:54:00:c3:85:7d in network mk-stopped-upgrade-646879
	I1018 15:27:34.454173 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | Using SSH client type: external
	I1018 15:27:34.454220 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | Using SSH private key: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/stopped-upgrade-646879/id_rsa (-rw-------)
	I1018 15:27:34.454268 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.247 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/stopped-upgrade-646879/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1018 15:27:34.454285 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | About to run SSH command:
	I1018 15:27:34.454304 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | exit 0
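
The external SSH probe above shells out to /usr/bin/ssh with a hardened option set and runs `exit 0` on the guest until it answers. A minimal Go sketch of the same probe (options are reordered before the destination for standard OpenSSH argument parsing; host, user, and key path are taken from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // Probe SSH reachability the way the log above does: one ssh invocation
    // that runs `exit 0` on the guest. In the real flow this is retried in a
    // loop until the guest responds.
    func main() {
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectionAttempts=3",
    		"-o", "ConnectTimeout=10",
    		"-o", "ControlMaster=no",
    		"-o", "ControlPath=none",
    		"-o", "LogLevel=quiet",
    		"-o", "PasswordAuthentication=no",
    		"-o", "ServerAliveInterval=60",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "IdentitiesOnly=yes",
    		"-i", "/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/stopped-upgrade-646879/id_rsa",
    		"-p", "22",
    		"docker@192.168.50.247",
    		"exit 0",
    	}
    	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
    	fmt.Printf("ssh probe: err=%v output=%q\n", err, out)
    }
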
	I1018 15:27:32.141682 1804089 provision.go:177] copyRemoteCerts
	I1018 15:27:32.141765 1804089 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 15:27:32.141796 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHHostname
	I1018 15:27:32.144895 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.145322 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:auto-320866 Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:32.145370 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.145593 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHPort
	I1018 15:27:32.145881 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHKeyPath
	I1018 15:27:32.146079 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHUsername
	I1018 15:27:32.146286 1804089 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/auto-320866/id_rsa Username:docker}
	I1018 15:27:32.234398 1804089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 15:27:32.268808 1804089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1018 15:27:32.302538 1804089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 15:27:32.336859 1804089 provision.go:87] duration metric: took 699.045397ms to configureAuth
	I1018 15:27:32.336889 1804089 buildroot.go:189] setting minikube options for container-runtime
	I1018 15:27:32.337109 1804089 config.go:182] Loaded profile config "auto-320866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:27:32.337220 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHHostname
	I1018 15:27:32.340814 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.341273 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:auto-320866 Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:32.341303 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.341592 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHPort
	I1018 15:27:32.341818 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHKeyPath
	I1018 15:27:32.342062 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHKeyPath
	I1018 15:27:32.342214 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHUsername
	I1018 15:27:32.342492 1804089 main.go:141] libmachine: Using SSH client type: native
	I1018 15:27:32.342724 1804089 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1018 15:27:32.342750 1804089 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 15:27:32.605236 1804089 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
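The sysconfig drop-in just written hands cri-o an extra flag marking the service CIDR as an insecure registry range, so pulls from in-cluster registries on 10.96.0.0/12 work without TLS. A sketch of generating that file's content (the function name is illustrative; the filename and variable match the log):

    package main

    import "fmt"

    // crioMinikubeOptions renders the single-variable sysconfig file that
    // the systemd unit for cri-o sources, as seen in the log above.
    func crioMinikubeOptions(serviceCIDR string) string {
    	return fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
    }

    func main() {
    	fmt.Print("# /etc/sysconfig/crio.minikube\n" + crioMinikubeOptions("10.96.0.0/12"))
    }
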
	I1018 15:27:32.605269 1804089 main.go:141] libmachine: Checking connection to Docker...
	I1018 15:27:32.605279 1804089 main.go:141] libmachine: (auto-320866) Calling .GetURL
	I1018 15:27:32.606655 1804089 main.go:141] libmachine: (auto-320866) DBG | using libvirt version 8000000
	I1018 15:27:32.609760 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.610201 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:auto-320866 Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:32.610228 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.610462 1804089 main.go:141] libmachine: Docker is up and running!
	I1018 15:27:32.610480 1804089 main.go:141] libmachine: Reticulating splines...
	I1018 15:27:32.610488 1804089 client.go:171] duration metric: took 20.372216688s to LocalClient.Create
	I1018 15:27:32.610515 1804089 start.go:167] duration metric: took 20.372288901s to libmachine.API.Create "auto-320866"
	I1018 15:27:32.610526 1804089 start.go:293] postStartSetup for "auto-320866" (driver="kvm2")
	I1018 15:27:32.610536 1804089 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 15:27:32.610560 1804089 main.go:141] libmachine: (auto-320866) Calling .DriverName
	I1018 15:27:32.610860 1804089 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 15:27:32.610901 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHHostname
	I1018 15:27:32.613716 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.614091 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:auto-320866 Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:32.614120 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.614356 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHPort
	I1018 15:27:32.614578 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHKeyPath
	I1018 15:27:32.614754 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHUsername
	I1018 15:27:32.614935 1804089 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/auto-320866/id_rsa Username:docker}
	I1018 15:27:32.702477 1804089 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 15:27:32.708272 1804089 info.go:137] Remote host: Buildroot 2025.02
	I1018 15:27:32.708315 1804089 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1755824/.minikube/addons for local assets ...
	I1018 15:27:32.708401 1804089 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1755824/.minikube/files for local assets ...
	I1018 15:27:32.708477 1804089 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-1755824/.minikube/files/etc/ssl/certs/17597922.pem -> 17597922.pem in /etc/ssl/certs
	I1018 15:27:32.708620 1804089 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 15:27:32.721109 1804089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/files/etc/ssl/certs/17597922.pem --> /etc/ssl/certs/17597922.pem (1708 bytes)
	I1018 15:27:32.754337 1804089 start.go:296] duration metric: took 143.793935ms for postStartSetup
	I1018 15:27:32.754440 1804089 main.go:141] libmachine: (auto-320866) Calling .GetConfigRaw
	I1018 15:27:32.755367 1804089 main.go:141] libmachine: (auto-320866) Calling .GetIP
	I1018 15:27:32.758406 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.758785 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:auto-320866 Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:32.758818 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.759074 1804089 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/config.json ...
	I1018 15:27:32.759331 1804089 start.go:128] duration metric: took 20.542032797s to createHost
	I1018 15:27:32.759376 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHHostname
	I1018 15:27:32.762139 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.762524 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:auto-320866 Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:32.762544 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.762771 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHPort
	I1018 15:27:32.762979 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHKeyPath
	I1018 15:27:32.763137 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHKeyPath
	I1018 15:27:32.763286 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHUsername
	I1018 15:27:32.763490 1804089 main.go:141] libmachine: Using SSH client type: native
	I1018 15:27:32.763775 1804089 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1018 15:27:32.763791 1804089 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1018 15:27:32.880669 1804089 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760801252.836292185
	
	I1018 15:27:32.880696 1804089 fix.go:216] guest clock: 1760801252.836292185
	I1018 15:27:32.880703 1804089 fix.go:229] Guest: 2025-10-18 15:27:32.836292185 +0000 UTC Remote: 2025-10-18 15:27:32.759360109 +0000 UTC m=+20.695537015 (delta=76.932076ms)
	I1018 15:27:32.880726 1804089 fix.go:200] guest clock delta is within tolerance: 76.932076ms
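
The clock check above runs `date +%s.%N` on the guest and compares it against the host-side timestamp; the 76.932076ms delta is the difference of the two values shown. A sketch of that comparison (the tolerance value is an assumption for illustration):

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    // guestClockDelta parses the guest's `date +%s.%N` output and returns
    // its skew against a host-side reference time. The values below are the
    // two timestamps from the log; float64 parsing loses a few hundred
    // nanoseconds, so the printed delta is approximate.
    func guestClockDelta(guestOut string, ref time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(guestOut, 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return guest.Sub(ref), nil
    }

    func main() {
    	ref := time.Unix(1760801252, 759360109) // the "Remote" timestamp above
    	delta, err := guestClockDelta("1760801252.836292185", ref)
    	if err != nil {
    		panic(err)
    	}
    	// the 2s tolerance here is an assumption for illustration
    	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < 2*time.Second)
    }
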
	I1018 15:27:32.880731 1804089 start.go:83] releasing machines lock for "auto-320866", held for 20.663540336s
	I1018 15:27:32.880760 1804089 main.go:141] libmachine: (auto-320866) Calling .DriverName
	I1018 15:27:32.881153 1804089 main.go:141] libmachine: (auto-320866) Calling .GetIP
	I1018 15:27:32.884771 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.885266 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:auto-320866 Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:32.885299 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.885588 1804089 main.go:141] libmachine: (auto-320866) Calling .DriverName
	I1018 15:27:32.886294 1804089 main.go:141] libmachine: (auto-320866) Calling .DriverName
	I1018 15:27:32.886529 1804089 main.go:141] libmachine: (auto-320866) Calling .DriverName
	I1018 15:27:32.886645 1804089 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 15:27:32.886727 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHHostname
	I1018 15:27:32.886801 1804089 ssh_runner.go:195] Run: cat /version.json
	I1018 15:27:32.886831 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHHostname
	I1018 15:27:32.891052 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.892252 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:auto-320866 Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:32.892282 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.892308 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.892594 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHPort
	I1018 15:27:32.892840 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHKeyPath
	I1018 15:27:32.892869 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:auto-320866 Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:32.892896 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.893157 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHUsername
	I1018 15:27:32.893161 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHPort
	I1018 15:27:32.893360 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHKeyPath
	I1018 15:27:32.893349 1804089 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/auto-320866/id_rsa Username:docker}
	I1018 15:27:32.893522 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHUsername
	I1018 15:27:32.893711 1804089 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/auto-320866/id_rsa Username:docker}
	I1018 15:27:33.003446 1804089 ssh_runner.go:195] Run: systemctl --version
	I1018 15:27:33.010304 1804089 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 15:27:33.171421 1804089 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 15:27:33.181773 1804089 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 15:27:33.181884 1804089 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 15:27:33.205245 1804089 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 15:27:33.205273 1804089 start.go:495] detecting cgroup driver to use...
	I1018 15:27:33.205373 1804089 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 15:27:33.226108 1804089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 15:27:33.246616 1804089 docker.go:218] disabling cri-docker service (if available) ...
	I1018 15:27:33.246715 1804089 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 15:27:33.269113 1804089 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 15:27:33.289669 1804089 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 15:27:33.455092 1804089 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 15:27:33.687419 1804089 docker.go:234] disabling docker service ...
	I1018 15:27:33.687507 1804089 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 15:27:33.705948 1804089 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 15:27:33.729314 1804089 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 15:27:33.912416 1804089 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 15:27:34.076011 1804089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 15:27:34.094033 1804089 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 15:27:34.123890 1804089 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 15:27:34.123952 1804089 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:27:34.140864 1804089 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 15:27:34.140956 1804089 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:27:34.156274 1804089 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:27:34.173832 1804089 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:27:34.188468 1804089 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 15:27:34.205042 1804089 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:27:34.219960 1804089 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:27:34.246507 1804089 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
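
The sed invocations above pin the pause image, switch cri-o to the cgroupfs cgroup manager, re-add conmon_cgroup, and open unprivileged ports via default_sysctls, all against /etc/crio/crio.conf.d/02-crio.conf. An in-memory Go analogue of the first few rewrites (the regexes mirror the sed expressions; the config fragment is illustrative):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // Apply the same rewrites the sed commands above perform, but against an
    // in-memory config string instead of the remote file.
    func main() {
    	conf := `pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"`

    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	// drop any existing conmon_cgroup, then re-add it after cgroup_manager
    	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).ReplaceAllString(conf, "")
    	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
    		ReplaceAllString(conf, "${1}\nconmon_cgroup = \"pod\"")

    	fmt.Println(conf)
    }
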
	I1018 15:27:34.262043 1804089 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 15:27:34.274149 1804089 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1018 15:27:34.274213 1804089 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1018 15:27:34.301602 1804089 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
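
The fallback above is deliberate: `sysctl net.bridge.bridge-nf-call-iptables` fails while br_netfilter is unloaded, so the module is loaded and IPv4 forwarding enabled afterwards. A sketch of that sequence (requires root on a Linux host; illustrative only):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ensureBridgeNetfilter mirrors the fallback above: if the bridge
    // sysctl is missing, load br_netfilter and retry.
    func ensureBridgeNetfilter() error {
    	if exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run() == nil {
    		return nil
    	}
    	if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
    		return fmt.Errorf("modprobe br_netfilter: %w", err)
    	}
    	return exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run()
    }

    func main() {
    	if err := ensureBridgeNetfilter(); err != nil {
    		fmt.Println("netfilter setup failed:", err)
    		return
    	}
    	// equivalent of: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
    		fmt.Println("enabling ip_forward failed:", err)
    		return
    	}
    	fmt.Println("bridge netfilter and ip_forward configured")
    }
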
	I1018 15:27:34.322941 1804089 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:27:34.495220 1804089 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 15:27:34.613259 1804089 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 15:27:34.613366 1804089 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 15:27:34.620365 1804089 start.go:563] Will wait 60s for crictl version
	I1018 15:27:34.620448 1804089 ssh_runner.go:195] Run: which crictl
	I1018 15:27:34.625474 1804089 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1018 15:27:34.671923 1804089 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1018 15:27:34.672021 1804089 ssh_runner.go:195] Run: crio --version
	I1018 15:27:34.705835 1804089 ssh_runner.go:195] Run: crio --version
	I1018 15:27:34.742448 1804089 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1018 15:27:34.743772 1804089 main.go:141] libmachine: (auto-320866) Calling .GetIP
	I1018 15:27:34.747306 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:34.747780 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:auto-320866 Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:34.747808 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:34.748223 1804089 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1018 15:27:34.754278 1804089 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 15:27:34.772633 1804089 kubeadm.go:883] updating cluster {Name:auto-320866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-320866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 15:27:34.772779 1804089 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:27:34.772852 1804089 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 15:27:34.817485 1804089 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1018 15:27:34.817578 1804089 ssh_runner.go:195] Run: which lz4
	I1018 15:27:34.822795 1804089 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1018 15:27:34.829171 1804089 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1018 15:27:34.829207 1804089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1018 15:27:36.739614 1804089 crio.go:462] duration metric: took 1.916873546s to copy over tarball
	I1018 15:27:36.739706 1804089 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	W1018 15:27:35.256007 1803906 pod_ready.go:104] pod "etcd-pause-153767" is not "Ready", error: <nil>
	W1018 15:27:37.755592 1803906 pod_ready.go:104] pod "etcd-pause-153767" is not "Ready", error: <nil>
	I1018 15:27:38.490838 1804089 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.751094877s)
	I1018 15:27:38.490879 1804089 crio.go:469] duration metric: took 1.751221412s to extract the tarball
	I1018 15:27:38.490892 1804089 ssh_runner.go:146] rm: /preloaded.tar.lz4
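
The preload handling above is a four-step sequence: stat the tarball on the guest, upload it when missing (~409 MB here), untar into /var with extended attributes preserved so image capabilities survive, then delete the archive. A dry-run sketch that only prints the equivalent commands (the upload is shown as plain scp for illustration; minikube streams it over its own SSH client):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // Print the command sequence the preload flow above runs on the guest.
    func main() {
    	steps := [][]string{
    		{"stat", "-c", "%s %y", "/preloaded.tar.lz4"},
    		{"scp", "preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4",
    			"docker@192.168.39.149:/preloaded.tar.lz4"},
    		{"sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
    			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4"},
    		{"sudo", "rm", "/preloaded.tar.lz4"},
    	}
    	for i, s := range steps {
    		fmt.Printf("step %d: %s\n", i+1, strings.Join(s, " "))
    	}
    }
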
	I1018 15:27:38.535563 1804089 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 15:27:38.585531 1804089 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 15:27:38.585579 1804089 cache_images.go:85] Images are preloaded, skipping loading
	I1018 15:27:38.585590 1804089 kubeadm.go:934] updating node { 192.168.39.149 8443 v1.34.1 crio true true} ...
	I1018 15:27:38.585752 1804089 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-320866 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-320866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 15:27:38.585888 1804089 ssh_runner.go:195] Run: crio config
	I1018 15:27:38.649069 1804089 cni.go:84] Creating CNI manager for ""
	I1018 15:27:38.649097 1804089 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 15:27:38.649117 1804089 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 15:27:38.649140 1804089 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.149 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-320866 NodeName:auto-320866 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.149"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.149 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 15:27:38.649262 1804089 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.149
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-320866"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.149"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.149"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
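The kubeadm config printed above is rendered from the options logged at kubeadm.go:190. A minimal sketch of templating its InitConfiguration section (the struct and field names are illustrative, not minikube's):

    package main

    import (
    	"os"
    	"text/template"
    )

    // initCfg holds the handful of values the InitConfiguration needs;
    // values below come straight from the log.
    type initCfg struct {
    	AdvertiseAddress string
    	BindPort         int
    	NodeName         string
    	CRISocket        string
    }

    const initTpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: unix://{{.CRISocket}}
      name: "{{.NodeName}}"
      taints: []
    `

    func main() {
    	t := template.Must(template.New("init").Parse(initTpl))
    	_ = t.Execute(os.Stdout, initCfg{
    		AdvertiseAddress: "192.168.39.149",
    		BindPort:         8443,
    		NodeName:         "auto-320866",
    		CRISocket:        "/var/run/crio/crio.sock",
    	})
    }
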
	I1018 15:27:38.649360 1804089 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 15:27:38.664057 1804089 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 15:27:38.664147 1804089 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 15:27:38.681451 1804089 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1018 15:27:38.712362 1804089 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 15:27:38.742102 1804089 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1018 15:27:38.768948 1804089 ssh_runner.go:195] Run: grep 192.168.39.149	control-plane.minikube.internal$ /etc/hosts
	I1018 15:27:38.775124 1804089 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.149	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 15:27:38.797532 1804089 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:27:38.964464 1804089 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 15:27:39.013851 1804089 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866 for IP: 192.168.39.149
	I1018 15:27:39.013891 1804089 certs.go:195] generating shared ca certs ...
	I1018 15:27:39.013916 1804089 certs.go:227] acquiring lock for ca certs: {Name:mk20fae4d22bb4937e66ac0eaa52c1608fa22770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:27:39.014130 1804089 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.key
	I1018 15:27:39.014194 1804089 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.key
	I1018 15:27:39.014208 1804089 certs.go:257] generating profile certs ...
	I1018 15:27:39.014282 1804089 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/client.key
	I1018 15:27:39.014303 1804089 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/client.crt with IP's: []
	I1018 15:27:39.200100 1804089 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/client.crt ...
	I1018 15:27:39.200138 1804089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/client.crt: {Name:mka7d440c592c7c10bc0b3c3bb53a1b06d125246 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:27:39.200390 1804089 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/client.key ...
	I1018 15:27:39.200410 1804089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/client.key: {Name:mkc87a82c0f3aa5dc9da51162b0d987c2c458895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:27:39.200546 1804089 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/apiserver.key.6800b0e0
	I1018 15:27:39.200570 1804089 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/apiserver.crt.6800b0e0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.149]
	I1018 15:27:39.392258 1804089 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/apiserver.crt.6800b0e0 ...
	I1018 15:27:39.392290 1804089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/apiserver.crt.6800b0e0: {Name:mk08b05c19f618281ce00fe2e4927159dcb4b2d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:27:39.392481 1804089 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/apiserver.key.6800b0e0 ...
	I1018 15:27:39.392495 1804089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/apiserver.key.6800b0e0: {Name:mk36ce21393ce9885e30fa3f6b117483c2f44248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:27:39.392571 1804089 certs.go:382] copying /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/apiserver.crt.6800b0e0 -> /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/apiserver.crt
	I1018 15:27:39.392670 1804089 certs.go:386] copying /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/apiserver.key.6800b0e0 -> /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/apiserver.key
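The SAN list on this apiserver certificate ([10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.149]) covers the addresses clients may dial: the first IP of the 10.96.0.0/12 service CIDR (the in-cluster kubernetes service), loopback, what appears to be a legacy default service IP, and the VM address itself. To confirm what a generated certificate actually covers (illustrative, run against the profile's apiserver.crt):

	openssl x509 -noout -text -in apiserver.crt | grep -A1 'Subject Alternative Name'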
	I1018 15:27:39.392725 1804089 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/proxy-client.key
	I1018 15:27:39.392740 1804089 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/proxy-client.crt with IP's: []
	I1018 15:27:39.447183 1804089 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/proxy-client.crt ...
	I1018 15:27:39.447216 1804089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/proxy-client.crt: {Name:mk6680622925c699f8a2a2271a91a1a7fede3aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:27:39.447398 1804089 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/proxy-client.key ...
	I1018 15:27:39.447410 1804089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/proxy-client.key: {Name:mk3bb3acfcab493ec7cdf1e8c83831c98160e0a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:27:39.447589 1804089 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/1759792.pem (1338 bytes)
	W1018 15:27:39.447624 1804089 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/1759792_empty.pem, impossibly tiny 0 bytes
	I1018 15:27:39.447636 1804089 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 15:27:39.447656 1804089 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem (1082 bytes)
	I1018 15:27:39.447677 1804089 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/cert.pem (1123 bytes)
	I1018 15:27:39.447701 1804089 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/key.pem (1675 bytes)
	I1018 15:27:39.447748 1804089 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/files/etc/ssl/certs/17597922.pem (1708 bytes)
	I1018 15:27:39.448413 1804089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 15:27:39.482647 1804089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 15:27:39.522855 1804089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 15:27:39.557777 1804089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 15:27:39.592579 1804089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1018 15:27:39.626891 1804089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 15:27:39.660802 1804089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 15:27:39.700382 1804089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 15:27:39.744034 1804089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 15:27:39.790607 1804089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/1759792.pem --> /usr/share/ca-certificates/1759792.pem (1338 bytes)
	I1018 15:27:39.824608 1804089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/files/etc/ssl/certs/17597922.pem --> /usr/share/ca-certificates/17597922.pem (1708 bytes)
	I1018 15:27:39.859123 1804089 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 15:27:39.886795 1804089 ssh_runner.go:195] Run: openssl version
	I1018 15:27:39.894400 1804089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 15:27:39.910131 1804089 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:27:39.916370 1804089 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 14:09 /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:27:39.916473 1804089 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:27:39.924990 1804089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 15:27:39.939836 1804089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1759792.pem && ln -fs /usr/share/ca-certificates/1759792.pem /etc/ssl/certs/1759792.pem"
	I1018 15:27:39.959201 1804089 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1759792.pem
	I1018 15:27:39.965952 1804089 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 14:22 /usr/share/ca-certificates/1759792.pem
	I1018 15:27:39.966032 1804089 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1759792.pem
	I1018 15:27:39.973932 1804089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1759792.pem /etc/ssl/certs/51391683.0"
	I1018 15:27:39.988893 1804089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17597922.pem && ln -fs /usr/share/ca-certificates/17597922.pem /etc/ssl/certs/17597922.pem"
	I1018 15:27:40.004371 1804089 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17597922.pem
	I1018 15:27:40.010613 1804089 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 14:22 /usr/share/ca-certificates/17597922.pem
	I1018 15:27:40.010678 1804089 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17597922.pem
	I1018 15:27:40.020729 1804089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17597922.pem /etc/ssl/certs/3ec20f2e.0"
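The hash-and-symlink sequence repeated for all three certificates above is how an OpenSSL trust store is populated by hand: 'openssl x509 -hash' prints the subject-name hash the library uses for lookup, and a <hash>.0 symlink in /etc/ssl/certs makes the CA discoverable without running update-ca-certificates. For the minikube CA the two steps are, concretely:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0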
	I1018 15:27:40.036651 1804089 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 15:27:40.042254 1804089 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
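This failed stat is the branch point between a fresh bootstrap and a restart: apiserver-kubelet-client.crt only exists after a successful kubeadm run, so exit status 1 here routes minikube to the full "kubeadm init" path below. An equivalent manual probe (illustrative):

	sudo stat /var/lib/minikube/certs/apiserver-kubelet-client.crt || echo "no prior cluster, init from scratch"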
	I1018 15:27:40.042336 1804089 kubeadm.go:400] StartCluster: {Name:auto-320866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Clu
sterName:auto-320866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:27:40.042439 1804089 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 15:27:40.042493 1804089 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 15:27:40.090565 1804089 cri.go:89] found id: ""
	I1018 15:27:40.090659 1804089 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 15:27:40.104405 1804089 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 15:27:40.118413 1804089 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 15:27:40.132287 1804089 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 15:27:40.132316 1804089 kubeadm.go:157] found existing configuration files:
	
	I1018 15:27:40.132380 1804089 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 15:27:40.146747 1804089 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 15:27:40.146824 1804089 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 15:27:40.160242 1804089 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 15:27:40.176021 1804089 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 15:27:40.176108 1804089 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 15:27:40.193586 1804089 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 15:27:40.207559 1804089 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 15:27:40.207642 1804089 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 15:27:40.223303 1804089 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 15:27:40.236077 1804089 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 15:27:40.236157 1804089 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
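Each grep/rm pair above applies the same stale-config rule: a kubeconfig that does not reference https://control-plane.minikube.internal:8443 is treated as stale and removed; on this first start none of the files exist, so every removal is a no-op. Per file, the logic is roughly:

	# illustrative sketch of the per-file check minikube performs
	if ! sudo grep -q 'https://control-plane.minikube.internal:8443' /etc/kubernetes/admin.conf; then
	  sudo rm -f /etc/kubernetes/admin.conf
	fi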
	I1018 15:27:40.251952 1804089 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1018 15:27:40.316528 1804089 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 15:27:40.316604 1804089 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 15:27:40.434690 1804089 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 15:27:40.434902 1804089 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 15:27:40.435054 1804089 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 15:27:40.459518 1804089 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
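The long --ignore-preflight-errors list is intentional: minikube pre-populates the manifests and etcd directories and sizes swap, CPU, and memory itself, so the corresponding kubeadm checks would only raise false positives inside the VM. A way to preview such an init without modifying the node is kubeadm's dry-run mode (illustrative invocation):

	kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run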
	I1018 15:27:39.756391 1803906 pod_ready.go:94] pod "etcd-pause-153767" is "Ready"
	I1018 15:27:39.756427 1803906 pod_ready.go:86] duration metric: took 11.008679807s for pod "etcd-pause-153767" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:27:39.759320 1803906 pod_ready.go:83] waiting for pod "kube-apiserver-pause-153767" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:27:39.765926 1803906 pod_ready.go:94] pod "kube-apiserver-pause-153767" is "Ready"
	I1018 15:27:39.765962 1803906 pod_ready.go:86] duration metric: took 6.601881ms for pod "kube-apiserver-pause-153767" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:27:39.768524 1803906 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-153767" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:27:41.014968 1803906 pod_ready.go:94] pod "kube-controller-manager-pause-153767" is "Ready"
	I1018 15:27:41.015003 1803906 pod_ready.go:86] duration metric: took 1.246451729s for pod "kube-controller-manager-pause-153767" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:27:41.018318 1803906 pod_ready.go:83] waiting for pod "kube-proxy-nk7dv" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:27:41.024126 1803906 pod_ready.go:94] pod "kube-proxy-nk7dv" is "Ready"
	I1018 15:27:41.024159 1803906 pod_ready.go:86] duration metric: took 5.802503ms for pod "kube-proxy-nk7dv" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:27:41.026900 1803906 pod_ready.go:83] waiting for pod "kube-scheduler-pause-153767" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:27:41.031844 1803906 pod_ready.go:94] pod "kube-scheduler-pause-153767" is "Ready"
	I1018 15:27:41.031879 1803906 pod_ready.go:86] duration metric: took 4.943147ms for pod "kube-scheduler-pause-153767" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:27:41.031894 1803906 pod_ready.go:40] duration metric: took 14.806223688s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
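The block above is the readiness gate on the parallel pause-153767 start: each core component's pod is polled by label until it reports Ready or disappears, which sums to the 14.8s of extra waiting. The same condition can be checked by hand with kubectl wait (illustrative):

	kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=2m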
	I1018 15:27:41.086118 1803906 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 15:27:41.177844 1803906 out.go:179] * Done! kubectl is now configured to use "pause-153767" cluster and "default" namespace by default
	
	
	==> CRI-O <==
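The CRI-O debug log below is the runtime's view of a standard CRI status poll: a Version handshake, an ImageFsInfo query, then an unfiltered ListContainers, repeated every few hundred milliseconds by the status-gathering client. crictl renders the same data in table form, e.g. (illustrative):

	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system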
	Oct 18 15:27:42 pause-153767 crio[3323]: time="2025-10-18 15:27:42.161558411Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760801262161521386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5619de72-1140-4450-9707-518366586aec name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 15:27:42 pause-153767 crio[3323]: time="2025-10-18 15:27:42.162966517Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9f0d6bd1-e485-4437-a1f7-87c832020dc9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 15:27:42 pause-153767 crio[3323]: time="2025-10-18 15:27:42.163043285Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9f0d6bd1-e485-4437-a1f7-87c832020dc9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 15:27:42 pause-153767 crio[3323]: time="2025-10-18 15:27:42.163314805Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:21a568cafd8fca8bdd14531095c7015e724c24bbd085114951c559efb285489f,PodSandboxId:fd49bed045f2121e1bffed9753b08d14189d5271003239da9390b4b864de23f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760801244041919702,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2ztp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e28f3cfe-ccea-418b-9644-100bb187e0ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bea487e2380ff22474d8b53ac0c150513df07dad107cccdd078e760b889d4500,PodSandboxId:e4a4c1ff540678a7eecdd5b6742a5a945c87ab662295f3aae247480ac5baf728,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760801244021150038,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nk7dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95bf3faf-25ed-4469-9495-c37a4b55623b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5466f1480dc7718ecdada10117683d12668e8a7df3e23c2d59cd6aafbe1b45d,PodSandboxId:dc2e696af7db1bba846fad40943e731d3c665fcdafa20e4adc74247e1c0e319f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760801240282027384,Labels:map[string]string{io.kubernetes.container.name: etc
d,io.kubernetes.pod.name: etcd-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367319ad47c86ef802bed3f9ea34bf64,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:866b3859d3435292ba26b8279d82fec285b774e2995d5bc2d6fbe526ba01541e,PodSandboxId:7f795ccd6e6a1c8c6aba1349c95f40b0f935c3a466ddf50cfbcd9f4173115511,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,Creat
edAt:1760801240233255038,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1740a1cc4f8df7ba2aa7832e64e9753e,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d47c959b5afb3bd0512ac7716dce6a3ce20efe46488b887d0e822c09ed3fd25,PodSandboxId:602f1c56f8fbbd614dae0ece9ac878c2a4719293ce8d068c20237650dbdb69fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760801240279772452,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fafc30208e669c610c46e609c1377925,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cecb5df4eb46678d63bbbb832efea6a3954498e9c3d34ed051e7659d3754b2,PodSandboxId:b0dcbe14650b883c8b37a8019c368d923bd32775dc147639bda828f8bb769463,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc2152
7912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760801240193616801,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a16f738dddf4cf0754808042f73e183,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c34cce37ad9085a41dd300f3b2ca1aac9a93bb413c9b916ea9373feb06538cb,PodSandboxId:4856f934ceee0b175d5d3836914db39765a744b2a67836
0b565c778cdd4f4516,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760801233592368997,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nk7dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95bf3faf-25ed-4469-9495-c37a4b55623b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6f334e8dc36521935cdc9d4f5c25a21972c81a86da9a57acc28da27a052059,PodSandboxId:cef67b7a71988336fb9ea07d02d484f9a587475fa35bb857450d93e1e0a5ef88,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760801233506757700,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367319ad47c86ef802bed3f9ea34bf64,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6def6b6b8e27732be2161935ee97726ca9a087ca6cf6eb45ce93002e5be07331,PodSandboxId:86dcc162
895b287a63f5fc2b69cf8981fba1a72b86c0d798aba11b15abaebb97,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760801233589372219,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fafc30208e669c610c46e609c1377925,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},
},&Container{Id:2867fda4fb320f98bf6f1718fcdfc07eb29558e3d79aafb6c7a061e3085ccf7c,PodSandboxId:d1740c9e8fae0754bf25412ea9ada8029ecd027740d410c17498607d2bd1dbef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760801233372970242,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a16f738dddf4cf0754808042f73e183,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb528bc32dbb1002be0cffac24ce673dfd1509c69e86dc426a4e7871f3e89a13,PodSandboxId:866056d445c9bd595fc4e8d47cdc49f83b5088dcd648fa78c73b41adcf3dd72d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760801233288115839,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1740a1cc4f8df7ba2aa7832e64e9753e,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259
,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b5b5aaf2a5329d2967c6a33790286ccc71f927089704cce0272c4edbe7f1026,PodSandboxId:211f39e68c4ad62c2fa1abd7dc0da9199f04b7dfad48f20f7286c9a61eb59150,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760801184756999213,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2ztp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e28f3cfe-ccea-418b-9644-100bb187e0ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9f0d6bd1-e485-4437-a1f7-87c832020dc9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 15:27:42 pause-153767 crio[3323]: time="2025-10-18 15:27:42.227342764Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ee3bae94-6071-4713-aef8-d92a79cdd998 name=/runtime.v1.RuntimeService/Version
	Oct 18 15:27:42 pause-153767 crio[3323]: time="2025-10-18 15:27:42.227478538Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ee3bae94-6071-4713-aef8-d92a79cdd998 name=/runtime.v1.RuntimeService/Version
	Oct 18 15:27:42 pause-153767 crio[3323]: time="2025-10-18 15:27:42.229984904Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5358fe2f-65f0-4dc8-b417-fc2fe05455bc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 15:27:42 pause-153767 crio[3323]: time="2025-10-18 15:27:42.230434353Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760801262230411827,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5358fe2f-65f0-4dc8-b417-fc2fe05455bc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 15:27:42 pause-153767 crio[3323]: time="2025-10-18 15:27:42.231445660Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9079c3ed-9e5e-420f-b404-d90928eef2f9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 15:27:42 pause-153767 crio[3323]: time="2025-10-18 15:27:42.231585466Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9079c3ed-9e5e-420f-b404-d90928eef2f9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 15:27:42 pause-153767 crio[3323]: time="2025-10-18 15:27:42.232157201Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:21a568cafd8fca8bdd14531095c7015e724c24bbd085114951c559efb285489f,PodSandboxId:fd49bed045f2121e1bffed9753b08d14189d5271003239da9390b4b864de23f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760801244041919702,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2ztp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e28f3cfe-ccea-418b-9644-100bb187e0ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bea487e2380ff22474d8b53ac0c150513df07dad107cccdd078e760b889d4500,PodSandboxId:e4a4c1ff540678a7eecdd5b6742a5a945c87ab662295f3aae247480ac5baf728,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760801244021150038,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nk7dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95bf3faf-25ed-4469-9495-c37a4b55623b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5466f1480dc7718ecdada10117683d12668e8a7df3e23c2d59cd6aafbe1b45d,PodSandboxId:dc2e696af7db1bba846fad40943e731d3c665fcdafa20e4adc74247e1c0e319f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760801240282027384,Labels:map[string]string{io.kubernetes.container.name: etc
d,io.kubernetes.pod.name: etcd-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367319ad47c86ef802bed3f9ea34bf64,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:866b3859d3435292ba26b8279d82fec285b774e2995d5bc2d6fbe526ba01541e,PodSandboxId:7f795ccd6e6a1c8c6aba1349c95f40b0f935c3a466ddf50cfbcd9f4173115511,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,Creat
edAt:1760801240233255038,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1740a1cc4f8df7ba2aa7832e64e9753e,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d47c959b5afb3bd0512ac7716dce6a3ce20efe46488b887d0e822c09ed3fd25,PodSandboxId:602f1c56f8fbbd614dae0ece9ac878c2a4719293ce8d068c20237650dbdb69fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760801240279772452,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fafc30208e669c610c46e609c1377925,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cecb5df4eb46678d63bbbb832efea6a3954498e9c3d34ed051e7659d3754b2,PodSandboxId:b0dcbe14650b883c8b37a8019c368d923bd32775dc147639bda828f8bb769463,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc2152
7912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760801240193616801,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a16f738dddf4cf0754808042f73e183,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c34cce37ad9085a41dd300f3b2ca1aac9a93bb413c9b916ea9373feb06538cb,PodSandboxId:4856f934ceee0b175d5d3836914db39765a744b2a67836
0b565c778cdd4f4516,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760801233592368997,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nk7dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95bf3faf-25ed-4469-9495-c37a4b55623b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6f334e8dc36521935cdc9d4f5c25a21972c81a86da9a57acc28da27a052059,PodSandboxId:cef67b7a71988336fb9ea07d02d484f9a587475fa35bb857450d93e1e0a5ef88,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760801233506757700,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367319ad47c86ef802bed3f9ea34bf64,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6def6b6b8e27732be2161935ee97726ca9a087ca6cf6eb45ce93002e5be07331,PodSandboxId:86dcc162
895b287a63f5fc2b69cf8981fba1a72b86c0d798aba11b15abaebb97,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760801233589372219,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fafc30208e669c610c46e609c1377925,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},
},&Container{Id:2867fda4fb320f98bf6f1718fcdfc07eb29558e3d79aafb6c7a061e3085ccf7c,PodSandboxId:d1740c9e8fae0754bf25412ea9ada8029ecd027740d410c17498607d2bd1dbef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760801233372970242,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a16f738dddf4cf0754808042f73e183,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb528bc32dbb1002be0cffac24ce673dfd1509c69e86dc426a4e7871f3e89a13,PodSandboxId:866056d445c9bd595fc4e8d47cdc49f83b5088dcd648fa78c73b41adcf3dd72d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760801233288115839,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1740a1cc4f8df7ba2aa7832e64e9753e,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259
,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b5b5aaf2a5329d2967c6a33790286ccc71f927089704cce0272c4edbe7f1026,PodSandboxId:211f39e68c4ad62c2fa1abd7dc0da9199f04b7dfad48f20f7286c9a61eb59150,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760801184756999213,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2ztp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e28f3cfe-ccea-418b-9644-100bb187e0ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9079c3ed-9e5e-420f-b404-d90928eef2f9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 15:27:42 pause-153767 crio[3323]: time="2025-10-18 15:27:42.297044599Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c900155b-d5af-4fd8-9d52-86d6b2aec6aa name=/runtime.v1.RuntimeService/Version
	Oct 18 15:27:42 pause-153767 crio[3323]: time="2025-10-18 15:27:42.297138815Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c900155b-d5af-4fd8-9d52-86d6b2aec6aa name=/runtime.v1.RuntimeService/Version
	Oct 18 15:27:42 pause-153767 crio[3323]: time="2025-10-18 15:27:42.301313634Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dfe654fe-97df-4713-beec-209eda9f164d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 15:27:42 pause-153767 crio[3323]: time="2025-10-18 15:27:42.301759104Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760801262301701438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dfe654fe-97df-4713-beec-209eda9f164d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 15:27:42 pause-153767 crio[3323]: time="2025-10-18 15:27:42.303072099Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c01911c5-1daa-4d87-8412-7d6476896264 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 15:27:42 pause-153767 crio[3323]: time="2025-10-18 15:27:42.303319456Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c01911c5-1daa-4d87-8412-7d6476896264 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 15:27:42 pause-153767 crio[3323]: time="2025-10-18 15:27:42.303878438Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:21a568cafd8fca8bdd14531095c7015e724c24bbd085114951c559efb285489f,PodSandboxId:fd49bed045f2121e1bffed9753b08d14189d5271003239da9390b4b864de23f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760801244041919702,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2ztp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e28f3cfe-ccea-418b-9644-100bb187e0ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bea487e2380ff22474d8b53ac0c150513df07dad107cccdd078e760b889d4500,PodSandboxId:e4a4c1ff540678a7eecdd5b6742a5a945c87ab662295f3aae247480ac5baf728,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760801244021150038,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nk7dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95bf3faf-25ed-4469-9495-c37a4b55623b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5466f1480dc7718ecdada10117683d12668e8a7df3e23c2d59cd6aafbe1b45d,PodSandboxId:dc2e696af7db1bba846fad40943e731d3c665fcdafa20e4adc74247e1c0e319f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760801240282027384,Labels:map[string]string{io.kubernetes.container.name: etc
d,io.kubernetes.pod.name: etcd-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367319ad47c86ef802bed3f9ea34bf64,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:866b3859d3435292ba26b8279d82fec285b774e2995d5bc2d6fbe526ba01541e,PodSandboxId:7f795ccd6e6a1c8c6aba1349c95f40b0f935c3a466ddf50cfbcd9f4173115511,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,Creat
edAt:1760801240233255038,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1740a1cc4f8df7ba2aa7832e64e9753e,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d47c959b5afb3bd0512ac7716dce6a3ce20efe46488b887d0e822c09ed3fd25,PodSandboxId:602f1c56f8fbbd614dae0ece9ac878c2a4719293ce8d068c20237650dbdb69fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760801240279772452,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fafc30208e669c610c46e609c1377925,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cecb5df4eb46678d63bbbb832efea6a3954498e9c3d34ed051e7659d3754b2,PodSandboxId:b0dcbe14650b883c8b37a8019c368d923bd32775dc147639bda828f8bb769463,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc2152
7912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760801240193616801,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a16f738dddf4cf0754808042f73e183,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c34cce37ad9085a41dd300f3b2ca1aac9a93bb413c9b916ea9373feb06538cb,PodSandboxId:4856f934ceee0b175d5d3836914db39765a744b2a67836
0b565c778cdd4f4516,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760801233592368997,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nk7dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95bf3faf-25ed-4469-9495-c37a4b55623b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6f334e8dc36521935cdc9d4f5c25a21972c81a86da9a57acc28da27a052059,PodSandboxId:cef67b7a71988336fb9ea07d02d484f9a587475fa35bb857450d93e1e0a5ef88,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760801233506757700,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367319ad47c86ef802bed3f9ea34bf64,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6def6b6b8e27732be2161935ee97726ca9a087ca6cf6eb45ce93002e5be07331,PodSandboxId:86dcc162
895b287a63f5fc2b69cf8981fba1a72b86c0d798aba11b15abaebb97,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760801233589372219,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fafc30208e669c610c46e609c1377925,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},
},&Container{Id:2867fda4fb320f98bf6f1718fcdfc07eb29558e3d79aafb6c7a061e3085ccf7c,PodSandboxId:d1740c9e8fae0754bf25412ea9ada8029ecd027740d410c17498607d2bd1dbef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760801233372970242,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a16f738dddf4cf0754808042f73e183,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb528bc32dbb1002be0cffac24ce673dfd1509c69e86dc426a4e7871f3e89a13,PodSandboxId:866056d445c9bd595fc4e8d47cdc49f83b5088dcd648fa78c73b41adcf3dd72d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760801233288115839,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1740a1cc4f8df7ba2aa7832e64e9753e,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259
,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b5b5aaf2a5329d2967c6a33790286ccc71f927089704cce0272c4edbe7f1026,PodSandboxId:211f39e68c4ad62c2fa1abd7dc0da9199f04b7dfad48f20f7286c9a61eb59150,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760801184756999213,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2ztp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e28f3cfe-ccea-418b-9644-100bb187e0ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c01911c5-1daa-4d87-8412-7d6476896264 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 15:27:42 pause-153767 crio[3323]: time="2025-10-18 15:27:42.365090431Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e15196ea-42be-4d5f-b561-3362ffc153f6 name=/runtime.v1.RuntimeService/Version
	Oct 18 15:27:42 pause-153767 crio[3323]: time="2025-10-18 15:27:42.365172543Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e15196ea-42be-4d5f-b561-3362ffc153f6 name=/runtime.v1.RuntimeService/Version
	Oct 18 15:27:42 pause-153767 crio[3323]: time="2025-10-18 15:27:42.367378933Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=44fe1f7b-99f1-4911-9d1d-adf6432bf0a0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 15:27:42 pause-153767 crio[3323]: time="2025-10-18 15:27:42.368181475Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760801262368152884,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=44fe1f7b-99f1-4911-9d1d-adf6432bf0a0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 15:27:42 pause-153767 crio[3323]: time="2025-10-18 15:27:42.369213091Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9c70a631-a292-4600-bf21-87c4d5a29284 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 15:27:42 pause-153767 crio[3323]: time="2025-10-18 15:27:42.369618954Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9c70a631-a292-4600-bf21-87c4d5a29284 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 15:27:42 pause-153767 crio[3323]: time="2025-10-18 15:27:42.370361577Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:21a568cafd8fca8bdd14531095c7015e724c24bbd085114951c559efb285489f,PodSandboxId:fd49bed045f2121e1bffed9753b08d14189d5271003239da9390b4b864de23f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760801244041919702,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2ztp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e28f3cfe-ccea-418b-9644-100bb187e0ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bea487e2380ff22474d8b53ac0c150513df07dad107cccdd078e760b889d4500,PodSandboxId:e4a4c1ff540678a7eecdd5b6742a5a945c87ab662295f3aae247480ac5baf728,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760801244021150038,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nk7dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95bf3faf-25ed-4469-9495-c37a4b55623b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5466f1480dc7718ecdada10117683d12668e8a7df3e23c2d59cd6aafbe1b45d,PodSandboxId:dc2e696af7db1bba846fad40943e731d3c665fcdafa20e4adc74247e1c0e319f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760801240282027384,Labels:map[string]string{io.kubernetes.container.name: etc
d,io.kubernetes.pod.name: etcd-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367319ad47c86ef802bed3f9ea34bf64,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:866b3859d3435292ba26b8279d82fec285b774e2995d5bc2d6fbe526ba01541e,PodSandboxId:7f795ccd6e6a1c8c6aba1349c95f40b0f935c3a466ddf50cfbcd9f4173115511,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,Creat
edAt:1760801240233255038,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1740a1cc4f8df7ba2aa7832e64e9753e,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d47c959b5afb3bd0512ac7716dce6a3ce20efe46488b887d0e822c09ed3fd25,PodSandboxId:602f1c56f8fbbd614dae0ece9ac878c2a4719293ce8d068c20237650dbdb69fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760801240279772452,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fafc30208e669c610c46e609c1377925,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cecb5df4eb46678d63bbbb832efea6a3954498e9c3d34ed051e7659d3754b2,PodSandboxId:b0dcbe14650b883c8b37a8019c368d923bd32775dc147639bda828f8bb769463,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc2152
7912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760801240193616801,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a16f738dddf4cf0754808042f73e183,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c34cce37ad9085a41dd300f3b2ca1aac9a93bb413c9b916ea9373feb06538cb,PodSandboxId:4856f934ceee0b175d5d3836914db39765a744b2a67836
0b565c778cdd4f4516,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760801233592368997,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nk7dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95bf3faf-25ed-4469-9495-c37a4b55623b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6f334e8dc36521935cdc9d4f5c25a21972c81a86da9a57acc28da27a052059,PodSandboxId:cef67b7a71988336fb9ea07d02d484f9a587475fa35bb857450d93e1e0a5ef88,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760801233506757700,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367319ad47c86ef802bed3f9ea34bf64,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6def6b6b8e27732be2161935ee97726ca9a087ca6cf6eb45ce93002e5be07331,PodSandboxId:86dcc162
895b287a63f5fc2b69cf8981fba1a72b86c0d798aba11b15abaebb97,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760801233589372219,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fafc30208e669c610c46e609c1377925,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},
},&Container{Id:2867fda4fb320f98bf6f1718fcdfc07eb29558e3d79aafb6c7a061e3085ccf7c,PodSandboxId:d1740c9e8fae0754bf25412ea9ada8029ecd027740d410c17498607d2bd1dbef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760801233372970242,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a16f738dddf4cf0754808042f73e183,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb528bc32dbb1002be0cffac24ce673dfd1509c69e86dc426a4e7871f3e89a13,PodSandboxId:866056d445c9bd595fc4e8d47cdc49f83b5088dcd648fa78c73b41adcf3dd72d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760801233288115839,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1740a1cc4f8df7ba2aa7832e64e9753e,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259
,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b5b5aaf2a5329d2967c6a33790286ccc71f927089704cce0272c4edbe7f1026,PodSandboxId:211f39e68c4ad62c2fa1abd7dc0da9199f04b7dfad48f20f7286c9a61eb59150,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760801184756999213,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2ztp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e28f3cfe-ccea-418b-9644-100bb187e0ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9c70a631-a292-4600-bf21-87c4d5a29284 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	21a568cafd8fc       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   18 seconds ago       Running             coredns                   1                   fd49bed045f21       coredns-66bc5c9577-2ztp2
	bea487e2380ff       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   18 seconds ago       Running             kube-proxy                2                   e4a4c1ff54067       kube-proxy-nk7dv
	b5466f1480dc7       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   22 seconds ago       Running             etcd                      2                   dc2e696af7db1       etcd-pause-153767
	4d47c959b5afb       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   22 seconds ago       Running             kube-apiserver            2                   602f1c56f8fbb       kube-apiserver-pause-153767
	866b3859d3435       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   22 seconds ago       Running             kube-scheduler            2                   7f795ccd6e6a1       kube-scheduler-pause-153767
	c9cecb5df4eb4       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   22 seconds ago       Running             kube-controller-manager   2                   b0dcbe14650b8       kube-controller-manager-pause-153767
	7c34cce37ad90       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   28 seconds ago       Exited              kube-proxy                1                   4856f934ceee0       kube-proxy-nk7dv
	6def6b6b8e277       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   28 seconds ago       Exited              kube-apiserver            1                   86dcc162895b2       kube-apiserver-pause-153767
	ab6f334e8dc36       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   28 seconds ago       Exited              etcd                      1                   cef67b7a71988       etcd-pause-153767
	2867fda4fb320       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   29 seconds ago       Exited              kube-controller-manager   1                   d1740c9e8fae0       kube-controller-manager-pause-153767
	cb528bc32dbb1       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   29 seconds ago       Exited              kube-scheduler            1                   866056d445c9b       kube-scheduler-pause-153767
	5b5b5aaf2a532       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   211f39e68c4ad       coredns-66bc5c9577-2ztp2
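	
	For reference, a listing like the table above can normally be reproduced from inside the guest via crictl; a minimal sketch, where the profile name pause-153767 is taken from the logs above and sudo is assumed to be needed for the CRI-O socket:
	
	    minikube ssh -p pause-153767 -- sudo crictl ps -a
	    minikube ssh -p pause-153767 -- sudo crictl imagefsinfo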
	
	
	==> coredns [21a568cafd8fca8bdd14531095c7015e724c24bbd085114951c559efb285489f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40230 - 31640 "HINFO IN 3268297221603888393.2832899248348407764. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.086159951s
	
	
	==> coredns [5b5b5aaf2a5329d2967c6a33790286ccc71f927089704cce0272c4edbe7f1026] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	[INFO] Reloading complete
	[INFO] 127.0.0.1:59356 - 43908 "HINFO IN 6043093980954816288.5690242199728248812. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.110759548s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
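	
	The "dial tcp 10.96.0.1:443: i/o timeout" errors above are consistent with CoreDNS losing the apiserver while the control plane restarts, and the SIGTERM at the end is the old instance being replaced. A minimal sketch for pulling both the current and the previous (exited) instance's logs, assuming the kubectl context matches the minikube profile name:
	
	    kubectl --context pause-153767 -n kube-system logs coredns-66bc5c9577-2ztp2
	    kubectl --context pause-153767 -n kube-system logs -p coredns-66bc5c9577-2ztp2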
	
	
	==> describe nodes <==
	Name:               pause-153767
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-153767
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=pause-153767
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T15_26_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 15:26:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-153767
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 15:27:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 15:27:23 +0000   Sat, 18 Oct 2025 15:26:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 15:27:23 +0000   Sat, 18 Oct 2025 15:26:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 15:27:23 +0000   Sat, 18 Oct 2025 15:26:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 15:27:23 +0000   Sat, 18 Oct 2025 15:26:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.16
	  Hostname:    pause-153767
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 fc144a18922648ed9cfb63f6115e22f7
	  System UUID:                fc144a18-9226-48ed-9cfb-63f6115e22f7
	  Boot ID:                    3df6d59d-e4a1-4c3a-8504-85f1d554a509
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-2ztp2                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     79s
	  kube-system                 etcd-pause-153767                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         87s
	  kube-system                 kube-apiserver-pause-153767             250m (12%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-controller-manager-pause-153767    200m (10%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-proxy-nk7dv                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-pause-153767             100m (5%)     0 (0%)      0 (0%)           0 (0%)         85s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 77s                kube-proxy       
	  Normal  Starting                 18s                kube-proxy       
	  Normal  Starting                 92s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  92s (x8 over 92s)  kubelet          Node pause-153767 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    92s (x8 over 92s)  kubelet          Node pause-153767 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     92s (x7 over 92s)  kubelet          Node pause-153767 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  92s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    84s                kubelet          Node pause-153767 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  84s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  84s                kubelet          Node pause-153767 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     84s                kubelet          Node pause-153767 status is now: NodeHasSufficientPID
	  Normal  NodeReady                84s                kubelet          Node pause-153767 status is now: NodeReady
	  Normal  Starting                 84s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           80s                node-controller  Node pause-153767 event: Registered Node pause-153767 in Controller
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node pause-153767 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node pause-153767 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node pause-153767 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16s                node-controller  Node pause-153767 event: Registered Node pause-153767 in Controller
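	
	The node description above is standard kubectl output; a sketch for regenerating it, again assuming the context name matches the profile:
	
	    kubectl --context pause-153767 describe node pause-153767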
	
	
	==> dmesg <==
	[Oct18 15:25] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000063] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002382] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.202574] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000026] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000006] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.090250] kauditd_printk_skb: 1 callbacks suppressed
	[Oct18 15:26] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.124465] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.173081] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.358488] kauditd_printk_skb: 19 callbacks suppressed
	[ +10.672523] kauditd_printk_skb: 218 callbacks suppressed
	[Oct18 15:27] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.173109] kauditd_printk_skb: 410 callbacks suppressed
	[  +4.709866] kauditd_printk_skb: 112 callbacks suppressed
	
	
	==> etcd [ab6f334e8dc36521935cdc9d4f5c25a21972c81a86da9a57acc28da27a052059] <==
	
	
	==> etcd [b5466f1480dc7718ecdada10117683d12668e8a7df3e23c2d59cd6aafbe1b45d] <==
	{"level":"warn","ts":"2025-10-18T15:27:22.657941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:27:22.676037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:27:22.699299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:27:22.721057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:27:22.735013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:27:22.753711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:27:22.772623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:27:22.788094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:27:22.862962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48052","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T15:27:40.199723Z","caller":"traceutil/trace.go:172","msg":"trace[1769768384] transaction","detail":"{read_only:false; response_revision:526; number_of_response:1; }","duration":"394.074399ms","start":"2025-10-18T15:27:39.805629Z","end":"2025-10-18T15:27:40.199704Z","steps":["trace[1769768384] 'process raft request'  (duration: 393.917731ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T15:27:40.200445Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T15:27:39.805605Z","time spent":"394.304606ms","remote":"127.0.0.1:53530","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6605,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-153767\" mod_revision:449 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-153767\" value_size:6534 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-153767\" > >"}
	{"level":"warn","ts":"2025-10-18T15:27:40.745118Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"481.021438ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-pause-153767\" limit:1 ","response":"range_response_count:1 size:6620"}
	{"level":"info","ts":"2025-10-18T15:27:40.745206Z","caller":"traceutil/trace.go:172","msg":"trace[1702561045] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-pause-153767; range_end:; response_count:1; response_revision:526; }","duration":"481.12461ms","start":"2025-10-18T15:27:40.264068Z","end":"2025-10-18T15:27:40.745193Z","steps":["trace[1702561045] 'range keys from in-memory index tree'  (duration: 480.960522ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T15:27:40.745238Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T15:27:40.264048Z","time spent":"481.18223ms","remote":"127.0.0.1:53530","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":6643,"request content":"key:\"/registry/pods/kube-system/kube-controller-manager-pause-153767\" limit:1 "}
	{"level":"warn","ts":"2025-10-18T15:27:40.745445Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"451.205318ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T15:27:40.745470Z","caller":"traceutil/trace.go:172","msg":"trace[803117120] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:526; }","duration":"451.23271ms","start":"2025-10-18T15:27:40.294230Z","end":"2025-10-18T15:27:40.745463Z","steps":["trace[803117120] 'range keys from in-memory index tree'  (duration: 451.179215ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T15:27:41.004041Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"324.685498ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5902136601809966205 > lease_revoke:<id:51e899f7eeb6d32d>","response":"size:28"}
	{"level":"info","ts":"2025-10-18T15:27:41.004118Z","caller":"traceutil/trace.go:172","msg":"trace[1460180529] linearizableReadLoop","detail":"{readStateIndex:569; appliedIndex:568; }","duration":"300.967113ms","start":"2025-10-18T15:27:40.703141Z","end":"2025-10-18T15:27:41.004108Z","steps":["trace[1460180529] 'read index received'  (duration: 27.229µs)","trace[1460180529] 'applied index is now lower than readState.Index'  (duration: 300.939202ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T15:27:41.004216Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"301.092477ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T15:27:41.004230Z","caller":"traceutil/trace.go:172","msg":"trace[601817914] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:526; }","duration":"301.113352ms","start":"2025-10-18T15:27:40.703112Z","end":"2025-10-18T15:27:41.004225Z","steps":["trace[601817914] 'agreement among raft nodes before linearized reading'  (duration: 301.072786ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T15:27:41.004249Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T15:27:40.703096Z","time spent":"301.148665ms","remote":"127.0.0.1:53168","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2025-10-18T15:27:41.004425Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"258.927262ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T15:27:41.004440Z","caller":"traceutil/trace.go:172","msg":"trace[237818036] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:526; }","duration":"258.943584ms","start":"2025-10-18T15:27:40.745492Z","end":"2025-10-18T15:27:41.004436Z","steps":["trace[237818036] 'agreement among raft nodes before linearized reading'  (duration: 258.919132ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T15:27:41.004797Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"255.361745ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-153767\" limit:1 ","response":"range_response_count:1 size:5279"}
	{"level":"info","ts":"2025-10-18T15:27:41.004918Z","caller":"traceutil/trace.go:172","msg":"trace[738410035] range","detail":"{range_begin:/registry/minions/pause-153767; range_end:; response_count:1; response_revision:526; }","duration":"255.485855ms","start":"2025-10-18T15:27:40.749420Z","end":"2025-10-18T15:27:41.004906Z","steps":["trace[738410035] 'agreement among raft nodes before linearized reading'  (duration: 255.243828ms)"],"step_count":1}
	
	
	==> kernel <==
	 15:27:42 up 2 min,  0 users,  load average: 1.17, 0.47, 0.17
	Linux pause-153767 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [4d47c959b5afb3bd0512ac7716dce6a3ce20efe46488b887d0e822c09ed3fd25] <==
	I1018 15:27:23.637801       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 15:27:23.637967       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 15:27:23.638072       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 15:27:23.638124       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 15:27:23.655508       1 aggregator.go:171] initial CRD sync complete...
	I1018 15:27:23.655688       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 15:27:23.655771       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 15:27:23.655875       1 cache.go:39] Caches are synced for autoregister controller
	I1018 15:27:23.663932       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 15:27:23.664312       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 15:27:23.694394       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 15:27:23.704418       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 15:27:23.704510       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 15:27:23.704528       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 15:27:23.704547       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1018 15:27:23.710063       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 15:27:23.772642       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 15:27:24.498375       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 15:27:25.672456       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 15:27:25.758664       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 15:27:25.820678       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 15:27:25.832449       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 15:27:27.065036       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 15:27:27.321585       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 15:27:27.365020       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
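	
	Once the restarted apiserver reports its caches synced, as above, its aggregated readiness can be checked directly; a sketch, assuming the context matches the profile name:
	
	    kubectl --context pause-153767 get --raw='/readyz?verbose'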
	
	
	==> kube-apiserver [6def6b6b8e27732be2161935ee97726ca9a087ca6cf6eb45ce93002e5be07331] <==
	
	
	==> kube-controller-manager [2867fda4fb320f98bf6f1718fcdfc07eb29558e3d79aafb6c7a061e3085ccf7c] <==
	
	
	==> kube-controller-manager [c9cecb5df4eb46678d63bbbb832efea6a3954498e9c3d34ed051e7659d3754b2] <==
	I1018 15:27:26.972241       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 15:27:26.976970       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 15:27:26.980213       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 15:27:26.981551       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 15:27:26.989282       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 15:27:26.991463       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 15:27:26.998244       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 15:27:27.000644       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 15:27:27.004125       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 15:27:27.004277       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 15:27:27.004342       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 15:27:27.011302       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 15:27:27.011463       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 15:27:27.011397       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 15:27:27.011414       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 15:27:27.012339       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 15:27:27.012589       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 15:27:27.011384       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 15:27:27.012738       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 15:27:27.013911       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 15:27:27.016137       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 15:27:27.019424       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 15:27:27.019581       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 15:27:27.022944       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 15:27:27.027276       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	
	
	==> kube-proxy [7c34cce37ad9085a41dd300f3b2ca1aac9a93bb413c9b916ea9373feb06538cb] <==
	
	
	==> kube-proxy [bea487e2380ff22474d8b53ac0c150513df07dad107cccdd078e760b889d4500] <==
	I1018 15:27:24.270081       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 15:27:24.371113       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 15:27:24.371154       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.16"]
	E1018 15:27:24.371240       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 15:27:24.418174       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1018 15:27:24.418389       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1018 15:27:24.418460       1 server_linux.go:132] "Using iptables Proxier"
	I1018 15:27:24.431400       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 15:27:24.431869       1 server.go:527] "Version info" version="v1.34.1"
	I1018 15:27:24.431968       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:27:24.437682       1 config.go:200] "Starting service config controller"
	I1018 15:27:24.437723       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 15:27:24.437740       1 config.go:106] "Starting endpoint slice config controller"
	I1018 15:27:24.437744       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 15:27:24.437763       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 15:27:24.437767       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 15:27:24.438606       1 config.go:309] "Starting node config controller"
	I1018 15:27:24.438640       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 15:27:24.438647       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 15:27:24.539558       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 15:27:24.539626       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 15:27:24.539666       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
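	
	The ip6tables error above is expected on a single-stack IPv4 guest without the IPv6 nat table, and the nodePortAddresses message repeats kube-proxy's own suggestion to set --nodeport-addresses primary. A sketch for inspecting the effective proxy configuration, assuming the kubeadm-style kube-proxy ConfigMap name:
	
	    kubectl --context pause-153767 -n kube-system get configmap kube-proxy -o yaml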
	
	
	==> kube-scheduler [866b3859d3435292ba26b8279d82fec285b774e2995d5bc2d6fbe526ba01541e] <==
	I1018 15:27:21.766380       1 serving.go:386] Generated self-signed cert in-memory
	W1018 15:27:23.536973       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 15:27:23.537019       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 15:27:23.537036       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 15:27:23.537043       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 15:27:23.610754       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 15:27:23.611019       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:27:23.620945       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 15:27:23.631255       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 15:27:23.634767       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 15:27:23.632976       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 15:27:23.735734       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
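	
	The extension-apiserver-authentication warnings above carry their own remedy. A minimal sketch of that suggested command with the placeholders filled in; the binding name is hypothetical, and --user targets the identity named in the error rather than a service account:
	
	  kubectl --context pause-153767 -n kube-system create rolebinding \
	    extension-apiserver-authentication-reader-scheduler \
	    --role=extension-apiserver-authentication-reader \
	    --user=system:kube-scheduler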
	
	
	==> kube-scheduler [cb528bc32dbb1002be0cffac24ce673dfd1509c69e86dc426a4e7871f3e89a13] <==
	
	
	==> kubelet <==
	Oct 18 15:27:21 pause-153767 kubelet[3971]: E1018 15:27:21.018095    3971 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-153767\" not found" node="pause-153767"
	Oct 18 15:27:21 pause-153767 kubelet[3971]: E1018 15:27:21.027650    3971 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-153767\" not found" node="pause-153767"
	Oct 18 15:27:21 pause-153767 kubelet[3971]: I1018 15:27:21.352696    3971 kubelet_node_status.go:75] "Attempting to register node" node="pause-153767"
	Oct 18 15:27:22 pause-153767 kubelet[3971]: E1018 15:27:22.030149    3971 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-153767\" not found" node="pause-153767"
	Oct 18 15:27:22 pause-153767 kubelet[3971]: E1018 15:27:22.030549    3971 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-153767\" not found" node="pause-153767"
	Oct 18 15:27:22 pause-153767 kubelet[3971]: E1018 15:27:22.031987    3971 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-153767\" not found" node="pause-153767"
	Oct 18 15:27:22 pause-153767 kubelet[3971]: E1018 15:27:22.034095    3971 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-153767\" not found" node="pause-153767"
	Oct 18 15:27:23 pause-153767 kubelet[3971]: E1018 15:27:23.036717    3971 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-153767\" not found" node="pause-153767"
	Oct 18 15:27:23 pause-153767 kubelet[3971]: E1018 15:27:23.038103    3971 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-153767\" not found" node="pause-153767"
	Oct 18 15:27:23 pause-153767 kubelet[3971]: E1018 15:27:23.038899    3971 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-153767\" not found" node="pause-153767"
	Oct 18 15:27:23 pause-153767 kubelet[3971]: I1018 15:27:23.685215    3971 apiserver.go:52] "Watching apiserver"
	Oct 18 15:27:23 pause-153767 kubelet[3971]: I1018 15:27:23.691464    3971 kubelet_node_status.go:124] "Node was previously registered" node="pause-153767"
	Oct 18 15:27:23 pause-153767 kubelet[3971]: I1018 15:27:23.691550    3971 kubelet_node_status.go:78] "Successfully registered node" node="pause-153767"
	Oct 18 15:27:23 pause-153767 kubelet[3971]: I1018 15:27:23.691582    3971 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 15:27:23 pause-153767 kubelet[3971]: I1018 15:27:23.694915    3971 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 15:27:23 pause-153767 kubelet[3971]: I1018 15:27:23.718587    3971 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 18 15:27:23 pause-153767 kubelet[3971]: I1018 15:27:23.757309    3971 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/95bf3faf-25ed-4469-9495-c37a4b55623b-xtables-lock\") pod \"kube-proxy-nk7dv\" (UID: \"95bf3faf-25ed-4469-9495-c37a4b55623b\") " pod="kube-system/kube-proxy-nk7dv"
	Oct 18 15:27:23 pause-153767 kubelet[3971]: I1018 15:27:23.757471    3971 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/95bf3faf-25ed-4469-9495-c37a4b55623b-lib-modules\") pod \"kube-proxy-nk7dv\" (UID: \"95bf3faf-25ed-4469-9495-c37a4b55623b\") " pod="kube-system/kube-proxy-nk7dv"
	Oct 18 15:27:23 pause-153767 kubelet[3971]: I1018 15:27:23.996132    3971 scope.go:117] "RemoveContainer" containerID="7c34cce37ad9085a41dd300f3b2ca1aac9a93bb413c9b916ea9373feb06538cb"
	Oct 18 15:27:26 pause-153767 kubelet[3971]: I1018 15:27:26.078247    3971 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 15:27:28 pause-153767 kubelet[3971]: I1018 15:27:28.677707    3971 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 15:27:29 pause-153767 kubelet[3971]: E1018 15:27:29.846270    3971 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760801249845493092  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 18 15:27:29 pause-153767 kubelet[3971]: E1018 15:27:29.846294    3971 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760801249845493092  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 18 15:27:39 pause-153767 kubelet[3971]: E1018 15:27:39.850469    3971 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760801259849360985  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 18 15:27:39 pause-153767 kubelet[3971]: E1018 15:27:39.850619    3971 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760801259849360985  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
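	
	The eviction-manager errors above show the kubelet failing to interpret CRI-O's ImageFsInfo response. A minimal sketch for pulling the same stats straight from the runtime, assuming crictl is available in the guest (it ships in the minikube image):
	
	  # query CRI-O's image filesystem stats on the pause-153767 node
	  minikube -p pause-153767 ssh -- sudo crictl imagefsinfo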
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-153767 -n pause-153767
helpers_test.go:269: (dbg) Run:  kubectl --context pause-153767 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-153767 -n pause-153767
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-153767 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-153767 logs -n 25: (1.918656917s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-options-155388                                                                                                                                             │ cert-options-155388       │ jenkins │ v1.37.0 │ 18 Oct 25 15:24 UTC │ 18 Oct 25 15:24 UTC │
	│ start   │ -p cert-expiration-486593 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                   │ cert-expiration-486593    │ jenkins │ v1.37.0 │ 18 Oct 25 15:24 UTC │ 18 Oct 25 15:25 UTC │
	│ delete  │ -p NoKubernetes-479967                                                                                                                                             │ NoKubernetes-479967       │ jenkins │ v1.37.0 │ 18 Oct 25 15:24 UTC │ 18 Oct 25 15:24 UTC │
	│ start   │ -p NoKubernetes-479967 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                    │ NoKubernetes-479967       │ jenkins │ v1.37.0 │ 18 Oct 25 15:24 UTC │ 18 Oct 25 15:25 UTC │
	│ ssh     │ force-systemd-flag-261740 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                               │ force-systemd-flag-261740 │ jenkins │ v1.37.0 │ 18 Oct 25 15:24 UTC │ 18 Oct 25 15:24 UTC │
	│ delete  │ -p force-systemd-flag-261740                                                                                                                                       │ force-systemd-flag-261740 │ jenkins │ v1.37.0 │ 18 Oct 25 15:24 UTC │ 18 Oct 25 15:24 UTC │
	│ start   │ -p kubernetes-upgrade-075048 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-075048 │ jenkins │ v1.37.0 │ 18 Oct 25 15:24 UTC │ 18 Oct 25 15:25 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-607040 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker        │ running-upgrade-607040    │ jenkins │ v1.37.0 │ 18 Oct 25 15:24 UTC │                     │
	│ delete  │ -p running-upgrade-607040                                                                                                                                          │ running-upgrade-607040    │ jenkins │ v1.37.0 │ 18 Oct 25 15:24 UTC │ 18 Oct 25 15:24 UTC │
	│ start   │ -p pause-153767 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                │ pause-153767              │ jenkins │ v1.37.0 │ 18 Oct 25 15:24 UTC │ 18 Oct 25 15:27 UTC │
	│ ssh     │ -p NoKubernetes-479967 sudo systemctl is-active --quiet service kubelet                                                                                            │ NoKubernetes-479967       │ jenkins │ v1.37.0 │ 18 Oct 25 15:25 UTC │                     │
	│ stop    │ -p NoKubernetes-479967                                                                                                                                             │ NoKubernetes-479967       │ jenkins │ v1.37.0 │ 18 Oct 25 15:25 UTC │ 18 Oct 25 15:25 UTC │
	│ start   │ -p NoKubernetes-479967 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                         │ NoKubernetes-479967       │ jenkins │ v1.37.0 │ 18 Oct 25 15:25 UTC │ 18 Oct 25 15:26 UTC │
	│ stop    │ -p kubernetes-upgrade-075048                                                                                                                                       │ kubernetes-upgrade-075048 │ jenkins │ v1.37.0 │ 18 Oct 25 15:25 UTC │ 18 Oct 25 15:25 UTC │
	│ start   │ -p kubernetes-upgrade-075048 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-075048 │ jenkins │ v1.37.0 │ 18 Oct 25 15:25 UTC │ 18 Oct 25 15:26 UTC │
	│ ssh     │ -p NoKubernetes-479967 sudo systemctl is-active --quiet service kubelet                                                                                            │ NoKubernetes-479967       │ jenkins │ v1.37.0 │ 18 Oct 25 15:26 UTC │                     │
	│ delete  │ -p NoKubernetes-479967                                                                                                                                             │ NoKubernetes-479967       │ jenkins │ v1.37.0 │ 18 Oct 25 15:26 UTC │ 18 Oct 25 15:26 UTC │
	│ start   │ -p stopped-upgrade-646879 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                     │ stopped-upgrade-646879    │ jenkins │ v1.32.0 │ 18 Oct 25 15:26 UTC │ 18 Oct 25 15:27 UTC │
	│ start   │ -p kubernetes-upgrade-075048 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                        │ kubernetes-upgrade-075048 │ jenkins │ v1.37.0 │ 18 Oct 25 15:26 UTC │                     │
	│ start   │ -p kubernetes-upgrade-075048 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-075048 │ jenkins │ v1.37.0 │ 18 Oct 25 15:26 UTC │ 18 Oct 25 15:27 UTC │
	│ start   │ -p pause-153767 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                         │ pause-153767              │ jenkins │ v1.37.0 │ 18 Oct 25 15:27 UTC │ 18 Oct 25 15:27 UTC │
	│ delete  │ -p kubernetes-upgrade-075048                                                                                                                                       │ kubernetes-upgrade-075048 │ jenkins │ v1.37.0 │ 18 Oct 25 15:27 UTC │ 18 Oct 25 15:27 UTC │
	│ start   │ -p auto-320866 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                  │ auto-320866               │ jenkins │ v1.37.0 │ 18 Oct 25 15:27 UTC │                     │
	│ stop    │ stopped-upgrade-646879 stop                                                                                                                                        │ stopped-upgrade-646879    │ jenkins │ v1.32.0 │ 18 Oct 25 15:27 UTC │ 18 Oct 25 15:27 UTC │
	│ start   │ -p stopped-upgrade-646879 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                 │ stopped-upgrade-646879    │ jenkins │ v1.37.0 │ 18 Oct 25 15:27 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 15:27:19
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 15:27:19.547324 1804300 out.go:360] Setting OutFile to fd 1 ...
	I1018 15:27:19.547664 1804300 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:27:19.547676 1804300 out.go:374] Setting ErrFile to fd 2...
	I1018 15:27:19.547684 1804300 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:27:19.547995 1804300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1755824/.minikube/bin
	I1018 15:27:19.548688 1804300 out.go:368] Setting JSON to false
	I1018 15:27:19.550121 1804300 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":25788,"bootTime":1760775452,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 15:27:19.550273 1804300 start.go:141] virtualization: kvm guest
	I1018 15:27:19.554529 1804300 out.go:179] * [stopped-upgrade-646879] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 15:27:19.556205 1804300 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 15:27:19.556198 1804300 notify.go:220] Checking for updates...
	I1018 15:27:19.558768 1804300 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 15:27:19.560142 1804300 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1755824/kubeconfig
	I1018 15:27:19.561400 1804300 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1755824/.minikube
	I1018 15:27:19.562677 1804300 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 15:27:19.564000 1804300 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 15:27:19.565900 1804300 config.go:182] Loaded profile config "stopped-upgrade-646879": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1018 15:27:19.566411 1804300 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 15:27:19.566497 1804300 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 15:27:19.582456 1804300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43015
	I1018 15:27:19.583073 1804300 main.go:141] libmachine: () Calling .GetVersion
	I1018 15:27:19.583722 1804300 main.go:141] libmachine: Using API Version  1
	I1018 15:27:19.583747 1804300 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 15:27:19.584203 1804300 main.go:141] libmachine: () Calling .GetMachineName
	I1018 15:27:19.584443 1804300 main.go:141] libmachine: (stopped-upgrade-646879) Calling .DriverName
	I1018 15:27:19.586405 1804300 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1018 15:27:19.587833 1804300 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 15:27:19.588392 1804300 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 15:27:19.588466 1804300 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 15:27:19.604287 1804300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46011
	I1018 15:27:19.604824 1804300 main.go:141] libmachine: () Calling .GetVersion
	I1018 15:27:19.605449 1804300 main.go:141] libmachine: Using API Version  1
	I1018 15:27:19.605482 1804300 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 15:27:19.606005 1804300 main.go:141] libmachine: () Calling .GetMachineName
	I1018 15:27:19.606244 1804300 main.go:141] libmachine: (stopped-upgrade-646879) Calling .DriverName
	I1018 15:27:19.646239 1804300 out.go:179] * Using the kvm2 driver based on existing profile
	I1018 15:27:19.647414 1804300 start.go:305] selected driver: kvm2
	I1018 15:27:19.647435 1804300 start.go:925] validating driver "kvm2" against &{Name:stopped-upgrade-646879 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-646879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.247 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:27:19.647580 1804300 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 15:27:19.648657 1804300 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:27:19.648758 1804300 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-1755824/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 15:27:19.668378 1804300 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 15:27:19.668414 1804300 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-1755824/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 15:27:19.684672 1804300 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 15:27:19.685244 1804300 cni.go:84] Creating CNI manager for ""
	I1018 15:27:19.685317 1804300 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 15:27:19.685399 1804300 start.go:349] cluster config:
	{Name:stopped-upgrade-646879 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-646879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.247 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:27:19.685524 1804300 iso.go:125] acquiring lock: {Name:mk7faf1d3c636cdbb2becc20102b665984151b51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:27:19.687333 1804300 out.go:179] * Starting "stopped-upgrade-646879" primary control-plane node in "stopped-upgrade-646879" cluster
	I1018 15:27:17.280323 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:17.281221 1804089 main.go:141] libmachine: (auto-320866) DBG | no network interface addresses found for domain auto-320866 (source=lease)
	I1018 15:27:17.281267 1804089 main.go:141] libmachine: (auto-320866) DBG | trying to list again with source=arp
	I1018 15:27:17.281611 1804089 main.go:141] libmachine: (auto-320866) DBG | unable to find current IP address of domain auto-320866 in network mk-auto-320866 (interfaces detected: [])
	I1018 15:27:17.281638 1804089 main.go:141] libmachine: (auto-320866) DBG | I1018 15:27:17.281592 1804118 retry.go:31] will retry after 914.464617ms: waiting for domain to come up
	I1018 15:27:18.197975 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:18.198641 1804089 main.go:141] libmachine: (auto-320866) DBG | no network interface addresses found for domain auto-320866 (source=lease)
	I1018 15:27:18.198670 1804089 main.go:141] libmachine: (auto-320866) DBG | trying to list again with source=arp
	I1018 15:27:18.199118 1804089 main.go:141] libmachine: (auto-320866) DBG | unable to find current IP address of domain auto-320866 in network mk-auto-320866 (interfaces detected: [])
	I1018 15:27:18.199165 1804089 main.go:141] libmachine: (auto-320866) DBG | I1018 15:27:18.199066 1804118 retry.go:31] will retry after 1.001827107s: waiting for domain to come up
	I1018 15:27:19.202905 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:19.203521 1804089 main.go:141] libmachine: (auto-320866) DBG | no network interface addresses found for domain auto-320866 (source=lease)
	I1018 15:27:19.203550 1804089 main.go:141] libmachine: (auto-320866) DBG | trying to list again with source=arp
	I1018 15:27:19.203867 1804089 main.go:141] libmachine: (auto-320866) DBG | unable to find current IP address of domain auto-320866 in network mk-auto-320866 (interfaces detected: [])
	I1018 15:27:19.203920 1804089 main.go:141] libmachine: (auto-320866) DBG | I1018 15:27:19.203849 1804118 retry.go:31] will retry after 1.834659839s: waiting for domain to come up
	I1018 15:27:21.041079 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:21.041933 1804089 main.go:141] libmachine: (auto-320866) DBG | no network interface addresses found for domain auto-320866 (source=lease)
	I1018 15:27:21.041958 1804089 main.go:141] libmachine: (auto-320866) DBG | trying to list again with source=arp
	I1018 15:27:21.042315 1804089 main.go:141] libmachine: (auto-320866) DBG | unable to find current IP address of domain auto-320866 in network mk-auto-320866 (interfaces detected: [])
	I1018 15:27:21.042352 1804089 main.go:141] libmachine: (auto-320866) DBG | I1018 15:27:21.042300 1804118 retry.go:31] will retry after 1.711084821s: waiting for domain to come up
	I1018 15:27:19.546327 1803906 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 15:27:19.638468 1803906 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1018 15:27:19.730744 1803906 api_server.go:52] waiting for apiserver process to appear ...
	I1018 15:27:19.730858 1803906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 15:27:20.231671 1803906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 15:27:20.731033 1803906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 15:27:20.763587 1803906 api_server.go:72] duration metric: took 1.032858692s to wait for apiserver process to appear ...
	I1018 15:27:20.763622 1803906 api_server.go:88] waiting for apiserver healthz status ...
	I1018 15:27:20.763648 1803906 api_server.go:253] Checking apiserver healthz at https://192.168.72.16:8443/healthz ...
	I1018 15:27:23.537090 1803906 api_server.go:279] https://192.168.72.16:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 15:27:23.537133 1803906 api_server.go:103] status: https://192.168.72.16:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 15:27:23.537155 1803906 api_server.go:253] Checking apiserver healthz at https://192.168.72.16:8443/healthz ...
	I1018 15:27:23.587002 1803906 api_server.go:279] https://192.168.72.16:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 15:27:23.587038 1803906 api_server.go:103] status: https://192.168.72.16:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 15:27:23.764445 1803906 api_server.go:253] Checking apiserver healthz at https://192.168.72.16:8443/healthz ...
	I1018 15:27:23.781482 1803906 api_server.go:279] https://192.168.72.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 15:27:23.781527 1803906 api_server.go:103] status: https://192.168.72.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 15:27:19.689591 1804300 preload.go:183] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1018 15:27:19.689654 1804300 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1018 15:27:19.689667 1804300 cache.go:58] Caching tarball of preloaded images
	I1018 15:27:19.689824 1804300 preload.go:233] Found /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 15:27:19.689850 1804300 cache.go:61] Finished verifying existence of preloaded tar for v1.28.3 on crio
	I1018 15:27:19.690022 1804300 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/stopped-upgrade-646879/config.json ...
	I1018 15:27:19.690330 1804300 start.go:360] acquireMachinesLock for stopped-upgrade-646879: {Name:mkd96faf82baee5d117338197f9c6cbf4f45de94 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1018 15:27:24.264415 1803906 api_server.go:253] Checking apiserver healthz at https://192.168.72.16:8443/healthz ...
	I1018 15:27:24.273237 1803906 api_server.go:279] https://192.168.72.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 15:27:24.273279 1803906 api_server.go:103] status: https://192.168.72.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 15:27:24.764543 1803906 api_server.go:253] Checking apiserver healthz at https://192.168.72.16:8443/healthz ...
	I1018 15:27:24.771820 1803906 api_server.go:279] https://192.168.72.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 15:27:24.771861 1803906 api_server.go:103] status: https://192.168.72.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 15:27:25.264577 1803906 api_server.go:253] Checking apiserver healthz at https://192.168.72.16:8443/healthz ...
	I1018 15:27:25.271163 1803906 api_server.go:279] https://192.168.72.16:8443/healthz returned 200:
	ok
	I1018 15:27:25.279606 1803906 api_server.go:141] control plane version: v1.34.1
	I1018 15:27:25.279637 1803906 api_server.go:131] duration metric: took 4.51600683s to wait for apiserver health ...
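	
	The probe loop above amounts to polling /healthz until it returns 200; the verbose form reproduces the per-check [+]/[-] listing seen in the 500 responses. A minimal sketch using kubectl's raw API access:
	
	  kubectl --context pause-153767 get --raw='/healthz?verbose'
	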
	I1018 15:27:25.279647 1803906 cni.go:84] Creating CNI manager for ""
	I1018 15:27:25.279654 1803906 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 15:27:25.281654 1803906 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1018 15:27:25.283203 1803906 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1018 15:27:25.300424 1803906 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1018 15:27:25.337641 1803906 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 15:27:25.344935 1803906 system_pods.go:59] 6 kube-system pods found
	I1018 15:27:25.344985 1803906 system_pods.go:61] "coredns-66bc5c9577-2ztp2" [e28f3cfe-ccea-418b-9644-100bb187e0ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:27:25.344996 1803906 system_pods.go:61] "etcd-pause-153767" [e1e2000c-d638-4a7e-9a10-c3120680ad8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 15:27:25.345010 1803906 system_pods.go:61] "kube-apiserver-pause-153767" [d065308c-ddc3-4717-8cb7-63ee0628dab0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 15:27:25.345020 1803906 system_pods.go:61] "kube-controller-manager-pause-153767" [b6861daf-7ee2-4568-9b70-20a7a8574fc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 15:27:25.345026 1803906 system_pods.go:61] "kube-proxy-nk7dv" [95bf3faf-25ed-4469-9495-c37a4b55623b] Running
	I1018 15:27:25.345034 1803906 system_pods.go:61] "kube-scheduler-pause-153767" [42ad4c8d-f8b0-448f-8f56-f34638904eb5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 15:27:25.345042 1803906 system_pods.go:74] duration metric: took 7.36436ms to wait for pod list to return data ...
	I1018 15:27:25.345052 1803906 node_conditions.go:102] verifying NodePressure condition ...
	I1018 15:27:25.351733 1803906 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1018 15:27:25.351777 1803906 node_conditions.go:123] node cpu capacity is 2
	I1018 15:27:25.351798 1803906 node_conditions.go:105] duration metric: took 6.739141ms to run NodePressure ...
	I1018 15:27:25.351872 1803906 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 15:27:25.851684 1803906 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1018 15:27:25.855882 1803906 kubeadm.go:743] kubelet initialised
	I1018 15:27:25.855910 1803906 kubeadm.go:744] duration metric: took 4.196459ms waiting for restarted kubelet to initialise ...
	I1018 15:27:25.855932 1803906 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 15:27:25.876350 1803906 ops.go:34] apiserver oom_adj: -16
	I1018 15:27:25.876383 1803906 kubeadm.go:601] duration metric: took 9.048616961s to restartPrimaryControlPlane
	I1018 15:27:25.876399 1803906 kubeadm.go:402] duration metric: took 9.215824328s to StartCluster
	I1018 15:27:25.876426 1803906 settings.go:142] acquiring lock: {Name:mkc4a015ef1628793f35d59d734503738678fa0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:27:25.876549 1803906 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-1755824/kubeconfig
	I1018 15:27:25.877493 1803906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/kubeconfig: {Name:mkd0359d239071160661347e1005ef052a3265ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:27:25.877779 1803906 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.16 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 15:27:25.877892 1803906 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 15:27:25.878047 1803906 config.go:182] Loaded profile config "pause-153767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:27:25.879466 1803906 out.go:179] * Verifying Kubernetes components...
	I1018 15:27:25.880271 1803906 out.go:179] * Enabled addons: 
	I1018 15:27:22.755016 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:22.755877 1804089 main.go:141] libmachine: (auto-320866) DBG | no network interface addresses found for domain auto-320866 (source=lease)
	I1018 15:27:22.755909 1804089 main.go:141] libmachine: (auto-320866) DBG | trying to list again with source=arp
	I1018 15:27:22.756221 1804089 main.go:141] libmachine: (auto-320866) DBG | unable to find current IP address of domain auto-320866 in network mk-auto-320866 (interfaces detected: [])
	I1018 15:27:22.756287 1804089 main.go:141] libmachine: (auto-320866) DBG | I1018 15:27:22.756222 1804118 retry.go:31] will retry after 1.995548146s: waiting for domain to come up
	I1018 15:27:24.753971 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:24.754798 1804089 main.go:141] libmachine: (auto-320866) DBG | no network interface addresses found for domain auto-320866 (source=lease)
	I1018 15:27:24.754840 1804089 main.go:141] libmachine: (auto-320866) DBG | trying to list again with source=arp
	I1018 15:27:24.755298 1804089 main.go:141] libmachine: (auto-320866) DBG | unable to find current IP address of domain auto-320866 in network mk-auto-320866 (interfaces detected: [])
	I1018 15:27:24.755326 1804089 main.go:141] libmachine: (auto-320866) DBG | I1018 15:27:24.755255 1804118 retry.go:31] will retry after 2.879345962s: waiting for domain to come up
	I1018 15:27:25.881132 1803906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:27:25.881721 1803906 addons.go:514] duration metric: took 3.843381ms for enable addons: enabled=[]
	I1018 15:27:26.100881 1803906 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 15:27:26.129496 1803906 node_ready.go:35] waiting up to 6m0s for node "pause-153767" to be "Ready" ...
	I1018 15:27:26.132758 1803906 node_ready.go:49] node "pause-153767" is "Ready"
	I1018 15:27:26.132803 1803906 node_ready.go:38] duration metric: took 3.249227ms for node "pause-153767" to be "Ready" ...
	I1018 15:27:26.132822 1803906 api_server.go:52] waiting for apiserver process to appear ...
	I1018 15:27:26.132876 1803906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 15:27:26.154485 1803906 api_server.go:72] duration metric: took 276.662725ms to wait for apiserver process to appear ...
	I1018 15:27:26.154522 1803906 api_server.go:88] waiting for apiserver healthz status ...
	I1018 15:27:26.154548 1803906 api_server.go:253] Checking apiserver healthz at https://192.168.72.16:8443/healthz ...
	I1018 15:27:26.161194 1803906 api_server.go:279] https://192.168.72.16:8443/healthz returned 200:
	ok
	I1018 15:27:26.162338 1803906 api_server.go:141] control plane version: v1.34.1
	I1018 15:27:26.162380 1803906 api_server.go:131] duration metric: took 7.848496ms to wait for apiserver health ...
	I1018 15:27:26.162391 1803906 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 15:27:26.172265 1803906 system_pods.go:59] 6 kube-system pods found
	I1018 15:27:26.172308 1803906 system_pods.go:61] "coredns-66bc5c9577-2ztp2" [e28f3cfe-ccea-418b-9644-100bb187e0ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:27:26.172319 1803906 system_pods.go:61] "etcd-pause-153767" [e1e2000c-d638-4a7e-9a10-c3120680ad8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 15:27:26.172329 1803906 system_pods.go:61] "kube-apiserver-pause-153767" [d065308c-ddc3-4717-8cb7-63ee0628dab0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 15:27:26.172338 1803906 system_pods.go:61] "kube-controller-manager-pause-153767" [b6861daf-7ee2-4568-9b70-20a7a8574fc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 15:27:26.172363 1803906 system_pods.go:61] "kube-proxy-nk7dv" [95bf3faf-25ed-4469-9495-c37a4b55623b] Running
	I1018 15:27:26.172372 1803906 system_pods.go:61] "kube-scheduler-pause-153767" [42ad4c8d-f8b0-448f-8f56-f34638904eb5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 15:27:26.172385 1803906 system_pods.go:74] duration metric: took 9.985111ms to wait for pod list to return data ...
	I1018 15:27:26.172399 1803906 default_sa.go:34] waiting for default service account to be created ...
	I1018 15:27:26.181955 1803906 default_sa.go:45] found service account: "default"
	I1018 15:27:26.181984 1803906 default_sa.go:55] duration metric: took 9.575988ms for default service account to be created ...
	I1018 15:27:26.181993 1803906 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 15:27:26.187464 1803906 system_pods.go:86] 6 kube-system pods found
	I1018 15:27:26.187498 1803906 system_pods.go:89] "coredns-66bc5c9577-2ztp2" [e28f3cfe-ccea-418b-9644-100bb187e0ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:27:26.187506 1803906 system_pods.go:89] "etcd-pause-153767" [e1e2000c-d638-4a7e-9a10-c3120680ad8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 15:27:26.187513 1803906 system_pods.go:89] "kube-apiserver-pause-153767" [d065308c-ddc3-4717-8cb7-63ee0628dab0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 15:27:26.187521 1803906 system_pods.go:89] "kube-controller-manager-pause-153767" [b6861daf-7ee2-4568-9b70-20a7a8574fc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 15:27:26.187529 1803906 system_pods.go:89] "kube-proxy-nk7dv" [95bf3faf-25ed-4469-9495-c37a4b55623b] Running
	I1018 15:27:26.187539 1803906 system_pods.go:89] "kube-scheduler-pause-153767" [42ad4c8d-f8b0-448f-8f56-f34638904eb5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 15:27:26.187551 1803906 system_pods.go:126] duration metric: took 5.551512ms to wait for k8s-apps to be running ...
	I1018 15:27:26.187568 1803906 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 15:27:26.187642 1803906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:27:26.210844 1803906 system_svc.go:56] duration metric: took 23.26473ms WaitForService to wait for kubelet
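
The kubelet check above is a pure exit-code probe: `systemctl is-active --quiet` returns 0 only while the unit is active. A minimal Go sketch of that probe over SSH (host and key path are hypothetical placeholders, not minikube's actual ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive mirrors the logged command: it only cares about the exit code.
func kubeletActive(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-i", keyPath,
		"-o", "StrictHostKeyChecking=no",
		"docker@"+host,
		"sudo systemctl is-active --quiet service kubelet")
	return cmd.Run() == nil
}

func main() {
	// hypothetical host/key, matching the VM created later in this log
	fmt.Println(kubeletActive("192.168.39.149", "/path/to/id_rsa"))
}
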
	I1018 15:27:26.210880 1803906 kubeadm.go:586] duration metric: took 333.065807ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 15:27:26.210900 1803906 node_conditions.go:102] verifying NodePressure condition ...
	I1018 15:27:26.215816 1803906 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1018 15:27:26.215849 1803906 node_conditions.go:123] node cpu capacity is 2
	I1018 15:27:26.215872 1803906 node_conditions.go:105] duration metric: took 4.958387ms to run NodePressure ...
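
The NodePressure step reads node capacity from the API server. A sketch of the same read with client-go, assuming a kubeconfig path; the Capacity accessors (Cpu(), StorageEphemeral()) are the standard k8s.io/api ResourceList helpers, not minikube code:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// the two values the log prints: ephemeral storage and CPU capacity
		fmt.Println(n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
	}
}
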
	I1018 15:27:26.215888 1803906 start.go:241] waiting for startup goroutines ...
	I1018 15:27:26.215898 1803906 start.go:246] waiting for cluster config update ...
	I1018 15:27:26.215913 1803906 start.go:255] writing updated cluster config ...
	I1018 15:27:26.216232 1803906 ssh_runner.go:195] Run: rm -f paused
	I1018 15:27:26.225625 1803906 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 15:27:26.226318 1803906 kapi.go:59] client config for pause-153767: &rest.Config{Host:"https://192.168.72.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/pause-153767/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/pause-153767/client.key", CAFile:"/home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 15:27:26.230664 1803906 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2ztp2" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 15:27:28.237299 1803906 pod_ready.go:104] pod "coredns-66bc5c9577-2ztp2" is not "Ready", error: <nil>
	I1018 15:27:28.743114 1803906 pod_ready.go:94] pod "coredns-66bc5c9577-2ztp2" is "Ready"
	I1018 15:27:28.743147 1803906 pod_ready.go:86] duration metric: took 2.512446816s for pod "coredns-66bc5c9577-2ztp2" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:27:28.747723 1803906 pod_ready.go:83] waiting for pod "etcd-pause-153767" in "kube-system" namespace to be "Ready" or be gone ...
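
The pod_ready waits above poll each pod until it reports Ready or disappears. A sketch of that "Ready or be gone" loop using apimachinery's wait helpers; this approximates the semantics visible in the log, not minikube's actual implementation:

package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReadyOrGone returns nil once the pod is Ready or no longer exists.
func waitPodReadyOrGone(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return true, nil // "be gone" counts as done
			}
			if err != nil {
				return false, nil // transient API error: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			fmt.Printf("pod %q is not \"Ready\"\n", name) // matches the W-level lines above
			return false, nil
		})
}
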
	I1018 15:27:27.636808 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:27.637444 1804089 main.go:141] libmachine: (auto-320866) DBG | no network interface addresses found for domain auto-320866 (source=lease)
	I1018 15:27:27.637475 1804089 main.go:141] libmachine: (auto-320866) DBG | trying to list again with source=arp
	I1018 15:27:27.637791 1804089 main.go:141] libmachine: (auto-320866) DBG | unable to find current IP address of domain auto-320866 in network mk-auto-320866 (interfaces detected: [])
	I1018 15:27:27.637843 1804089 main.go:141] libmachine: (auto-320866) DBG | I1018 15:27:27.637784 1804118 retry.go:31] will retry after 3.111244006s: waiting for domain to come up
	I1018 15:27:30.752642 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:30.753415 1804089 main.go:141] libmachine: (auto-320866) found domain IP: 192.168.39.149
	I1018 15:27:30.753449 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has current primary IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:30.753458 1804089 main.go:141] libmachine: (auto-320866) reserving static IP address...
	I1018 15:27:30.753949 1804089 main.go:141] libmachine: (auto-320866) DBG | unable to find host DHCP lease matching {name: "auto-320866", mac: "52:54:00:f3:b9:cb", ip: "192.168.39.149"} in network mk-auto-320866
	I1018 15:27:30.966376 1804089 main.go:141] libmachine: (auto-320866) DBG | Getting to WaitForSSH function...
	I1018 15:27:30.966411 1804089 main.go:141] libmachine: (auto-320866) reserved static IP address 192.168.39.149 for domain auto-320866
	I1018 15:27:30.966424 1804089 main.go:141] libmachine: (auto-320866) waiting for SSH...
	I1018 15:27:30.969972 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:30.970532 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:30.970582 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:30.970746 1804089 main.go:141] libmachine: (auto-320866) DBG | Using SSH client type: external
	I1018 15:27:30.970775 1804089 main.go:141] libmachine: (auto-320866) DBG | Using SSH private key: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/auto-320866/id_rsa (-rw-------)
	I1018 15:27:30.970847 1804089 main.go:141] libmachine: (auto-320866) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.149 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/auto-320866/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1018 15:27:30.970874 1804089 main.go:141] libmachine: (auto-320866) DBG | About to run SSH command:
	I1018 15:27:30.970899 1804089 main.go:141] libmachine: (auto-320866) DBG | exit 0
	I1018 15:27:31.106387 1804089 main.go:141] libmachine: (auto-320866) DBG | SSH cmd err, output: <nil>: 
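
WaitForSSH shells out to the external ssh binary with the options logged above and retries "exit 0" until the guest answers. A condensed sketch of that retry loop (key path and retry budget are hypothetical):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/path/to/id_rsa", // hypothetical key path
		"docker@192.168.39.149",
		"exit 0",
	}
	for attempt := 1; attempt <= 10; attempt++ {
		// success means sshd is up and the key is accepted
		if err := exec.Command("/usr/bin/ssh", args...).Run(); err == nil {
			fmt.Println("SSH is up")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
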
	I1018 15:27:31.106790 1804089 main.go:141] libmachine: (auto-320866) domain creation complete
	I1018 15:27:31.107264 1804089 main.go:141] libmachine: (auto-320866) Calling .GetConfigRaw
	I1018 15:27:31.108119 1804089 main.go:141] libmachine: (auto-320866) Calling .DriverName
	I1018 15:27:31.108375 1804089 main.go:141] libmachine: (auto-320866) Calling .DriverName
	I1018 15:27:31.108566 1804089 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1018 15:27:31.108579 1804089 main.go:141] libmachine: (auto-320866) Calling .GetState
	I1018 15:27:31.110102 1804089 main.go:141] libmachine: Detecting operating system of created instance...
	I1018 15:27:31.110115 1804089 main.go:141] libmachine: Waiting for SSH to be available...
	I1018 15:27:31.110120 1804089 main.go:141] libmachine: Getting to WaitForSSH function...
	I1018 15:27:31.110125 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHHostname
	I1018 15:27:31.113024 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:31.113411 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:auto-320866 Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:31.113450 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:31.113649 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHPort
	I1018 15:27:31.113837 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHKeyPath
	I1018 15:27:31.113984 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHKeyPath
	I1018 15:27:31.114133 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHUsername
	I1018 15:27:31.114287 1804089 main.go:141] libmachine: Using SSH client type: native
	I1018 15:27:31.114619 1804089 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1018 15:27:31.114633 1804089 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1018 15:27:31.228980 1804089 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 15:27:31.229009 1804089 main.go:141] libmachine: Detecting the provisioner...
	I1018 15:27:31.229016 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHHostname
	I1018 15:27:31.232831 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:31.233245 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:auto-320866 Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:31.233268 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:31.233523 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHPort
	I1018 15:27:31.233804 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHKeyPath
	I1018 15:27:31.234031 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHKeyPath
	I1018 15:27:31.234183 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHUsername
	I1018 15:27:31.234404 1804089 main.go:141] libmachine: Using SSH client type: native
	I1018 15:27:31.234640 1804089 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1018 15:27:31.234683 1804089 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1018 15:27:31.349841 1804089 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1018 15:27:31.349915 1804089 main.go:141] libmachine: found compatible host: buildroot
	I1018 15:27:31.349922 1804089 main.go:141] libmachine: Provisioning with buildroot...
	I1018 15:27:31.349930 1804089 main.go:141] libmachine: (auto-320866) Calling .GetMachineName
	I1018 15:27:31.350219 1804089 buildroot.go:166] provisioning hostname "auto-320866"
	I1018 15:27:31.350260 1804089 main.go:141] libmachine: (auto-320866) Calling .GetMachineName
	I1018 15:27:31.350522 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHHostname
	I1018 15:27:31.353648 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:31.354107 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:auto-320866 Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:31.354130 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:31.354422 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHPort
	I1018 15:27:31.354633 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHKeyPath
	I1018 15:27:31.354796 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHKeyPath
	I1018 15:27:31.354978 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHUsername
	I1018 15:27:31.355179 1804089 main.go:141] libmachine: Using SSH client type: native
	I1018 15:27:31.355467 1804089 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1018 15:27:31.355482 1804089 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-320866 && echo "auto-320866" | sudo tee /etc/hostname
	I1018 15:27:31.487801 1804089 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-320866
	
	I1018 15:27:31.487834 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHHostname
	I1018 15:27:31.491465 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:31.491885 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:auto-320866 Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:31.491908 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:31.492181 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHPort
	I1018 15:27:31.492414 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHKeyPath
	I1018 15:27:31.492594 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHKeyPath
	I1018 15:27:31.492750 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHUsername
	I1018 15:27:31.492930 1804089 main.go:141] libmachine: Using SSH client type: native
	I1018 15:27:31.493131 1804089 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1018 15:27:31.493146 1804089 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-320866' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-320866/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-320866' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 15:27:31.637675 1804089 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 15:27:31.637712 1804089 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21409-1755824/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-1755824/.minikube}
	I1018 15:27:31.637781 1804089 buildroot.go:174] setting up certificates
	I1018 15:27:31.637794 1804089 provision.go:84] configureAuth start
	I1018 15:27:31.637811 1804089 main.go:141] libmachine: (auto-320866) Calling .GetMachineName
	I1018 15:27:31.638187 1804089 main.go:141] libmachine: (auto-320866) Calling .GetIP
	I1018 15:27:31.641456 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:31.641856 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:auto-320866 Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:31.641883 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:31.642072 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHHostname
	I1018 15:27:31.644886 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:31.645303 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:auto-320866 Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:31.645332 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:31.645496 1804089 provision.go:143] copyHostCerts
	I1018 15:27:31.645562 1804089 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.pem, removing ...
	I1018 15:27:31.645587 1804089 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.pem
	I1018 15:27:31.645703 1804089 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.pem (1082 bytes)
	I1018 15:27:31.645856 1804089 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1755824/.minikube/cert.pem, removing ...
	I1018 15:27:31.645868 1804089 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1755824/.minikube/cert.pem
	I1018 15:27:31.645914 1804089 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-1755824/.minikube/cert.pem (1123 bytes)
	I1018 15:27:31.646018 1804089 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1755824/.minikube/key.pem, removing ...
	I1018 15:27:31.646029 1804089 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1755824/.minikube/key.pem
	I1018 15:27:31.646067 1804089 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-1755824/.minikube/key.pem (1675 bytes)
	I1018 15:27:31.646148 1804089 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca-key.pem org=jenkins.auto-320866 san=[127.0.0.1 192.168.39.149 auto-320866 localhost minikube]
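
The server cert generated above is signed by the profile CA and carries the logged SAN list. A compact crypto/x509 sketch of that issuance; the throwaway in-memory CA stands in for ca.pem/ca-key.pem, and error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// throwaway CA standing in for the minikube CA key pair
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.auto-320866"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// the SAN list from the provision.go line above
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.149")},
		DNSNames:    []string{"auto-320866", "localhost", "minikube"},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
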
	I1018 15:27:32.880890 1804300 start.go:364] duration metric: took 13.190435497s to acquireMachinesLock for "stopped-upgrade-646879"
	I1018 15:27:32.880940 1804300 start.go:96] Skipping create...Using existing machine configuration
	I1018 15:27:32.880949 1804300 fix.go:54] fixHost starting: 
	I1018 15:27:32.881400 1804300 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 15:27:32.881459 1804300 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 15:27:32.899795 1804300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36381
	I1018 15:27:32.900486 1804300 main.go:141] libmachine: () Calling .GetVersion
	I1018 15:27:32.901136 1804300 main.go:141] libmachine: Using API Version  1
	I1018 15:27:32.901164 1804300 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 15:27:32.901687 1804300 main.go:141] libmachine: () Calling .GetMachineName
	I1018 15:27:32.901960 1804300 main.go:141] libmachine: (stopped-upgrade-646879) Calling .DriverName
	I1018 15:27:32.902156 1804300 main.go:141] libmachine: (stopped-upgrade-646879) Calling .GetState
	I1018 15:27:32.904421 1804300 fix.go:112] recreateIfNeeded on stopped-upgrade-646879: state=Stopped err=<nil>
	I1018 15:27:32.904471 1804300 main.go:141] libmachine: (stopped-upgrade-646879) Calling .DriverName
	W1018 15:27:32.904659 1804300 fix.go:138] unexpected machine state, will restart: <nil>
	W1018 15:27:30.756176 1803906 pod_ready.go:104] pod "etcd-pause-153767" is not "Ready", error: <nil>
	W1018 15:27:33.254387 1803906 pod_ready.go:104] pod "etcd-pause-153767" is not "Ready", error: <nil>
	I1018 15:27:32.907482 1804300 out.go:252] * Restarting existing kvm2 VM for "stopped-upgrade-646879" ...
	I1018 15:27:32.907524 1804300 main.go:141] libmachine: (stopped-upgrade-646879) Calling .Start
	I1018 15:27:32.907738 1804300 main.go:141] libmachine: (stopped-upgrade-646879) starting domain...
	I1018 15:27:32.907760 1804300 main.go:141] libmachine: (stopped-upgrade-646879) ensuring networks are active...
	I1018 15:27:32.908660 1804300 main.go:141] libmachine: (stopped-upgrade-646879) Ensuring network default is active
	I1018 15:27:32.909215 1804300 main.go:141] libmachine: (stopped-upgrade-646879) Ensuring network mk-stopped-upgrade-646879 is active
	I1018 15:27:32.909825 1804300 main.go:141] libmachine: (stopped-upgrade-646879) getting domain XML...
	I1018 15:27:32.911026 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | starting domain XML:
	I1018 15:27:32.911051 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | <domain type='kvm'>
	I1018 15:27:32.911063 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |   <name>stopped-upgrade-646879</name>
	I1018 15:27:32.911077 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |   <uuid>47eaba17-7efa-45cc-afb5-18fbd25cf505</uuid>
	I1018 15:27:32.911087 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |   <memory unit='KiB'>3145728</memory>
	I1018 15:27:32.911099 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1018 15:27:32.911108 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |   <vcpu placement='static'>2</vcpu>
	I1018 15:27:32.911118 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |   <os>
	I1018 15:27:32.911143 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1018 15:27:32.911158 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <boot dev='cdrom'/>
	I1018 15:27:32.911170 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <boot dev='hd'/>
	I1018 15:27:32.911177 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <bootmenu enable='no'/>
	I1018 15:27:32.911189 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |   </os>
	I1018 15:27:32.911199 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |   <features>
	I1018 15:27:32.911216 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <acpi/>
	I1018 15:27:32.911228 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <apic/>
	I1018 15:27:32.911257 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <pae/>
	I1018 15:27:32.911281 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |   </features>
	I1018 15:27:32.911301 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1018 15:27:32.911312 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |   <clock offset='utc'/>
	I1018 15:27:32.911323 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |   <on_poweroff>destroy</on_poweroff>
	I1018 15:27:32.911352 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |   <on_reboot>restart</on_reboot>
	I1018 15:27:32.911366 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |   <on_crash>destroy</on_crash>
	I1018 15:27:32.911373 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |   <devices>
	I1018 15:27:32.911382 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1018 15:27:32.911389 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <disk type='file' device='cdrom'>
	I1018 15:27:32.911404 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <driver name='qemu' type='raw'/>
	I1018 15:27:32.911419 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <source file='/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/stopped-upgrade-646879/boot2docker.iso'/>
	I1018 15:27:32.911433 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <target dev='hdc' bus='scsi'/>
	I1018 15:27:32.911441 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <readonly/>
	I1018 15:27:32.911452 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1018 15:27:32.911462 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     </disk>
	I1018 15:27:32.911471 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <disk type='file' device='disk'>
	I1018 15:27:32.911479 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1018 15:27:32.911491 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <source file='/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/stopped-upgrade-646879/stopped-upgrade-646879.rawdisk'/>
	I1018 15:27:32.911499 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <target dev='hda' bus='virtio'/>
	I1018 15:27:32.911509 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1018 15:27:32.911530 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     </disk>
	I1018 15:27:32.911544 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1018 15:27:32.911557 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1018 15:27:32.911565 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     </controller>
	I1018 15:27:32.911578 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1018 15:27:32.911597 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1018 15:27:32.911606 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1018 15:27:32.911614 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     </controller>
	I1018 15:27:32.911621 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <interface type='network'>
	I1018 15:27:32.911629 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <mac address='52:54:00:c3:85:7d'/>
	I1018 15:27:32.911637 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <source network='mk-stopped-upgrade-646879'/>
	I1018 15:27:32.911676 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <model type='virtio'/>
	I1018 15:27:32.911713 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1018 15:27:32.911729 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     </interface>
	I1018 15:27:32.911737 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <interface type='network'>
	I1018 15:27:32.911767 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <mac address='52:54:00:87:ba:01'/>
	I1018 15:27:32.911780 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <source network='default'/>
	I1018 15:27:32.911803 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <model type='virtio'/>
	I1018 15:27:32.911827 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1018 15:27:32.911840 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     </interface>
	I1018 15:27:32.911850 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <serial type='pty'>
	I1018 15:27:32.911860 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <target type='isa-serial' port='0'>
	I1018 15:27:32.911870 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |         <model name='isa-serial'/>
	I1018 15:27:32.911879 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       </target>
	I1018 15:27:32.911887 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     </serial>
	I1018 15:27:32.911896 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <console type='pty'>
	I1018 15:27:32.911919 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <target type='serial' port='0'/>
	I1018 15:27:32.911927 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     </console>
	I1018 15:27:32.911935 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <input type='mouse' bus='ps2'/>
	I1018 15:27:32.911944 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <input type='keyboard' bus='ps2'/>
	I1018 15:27:32.911951 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <audio id='1' type='none'/>
	I1018 15:27:32.911960 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <memballoon model='virtio'>
	I1018 15:27:32.911970 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1018 15:27:32.911978 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     </memballoon>
	I1018 15:27:32.911990 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     <rng model='virtio'>
	I1018 15:27:32.911999 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <backend model='random'>/dev/random</backend>
	I1018 15:27:32.912009 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1018 15:27:32.912031 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |     </rng>
	I1018 15:27:32.912042 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG |   </devices>
	I1018 15:27:32.912076 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | </domain>
	I1018 15:27:32.912095 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | 
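
Fields the driver keeps referencing later, such as the interface MACs, come straight out of this domain XML. A sketch of extracting them with encoding/xml and a minimal struct (the embedded XML below is trimmed from the dump above):

package main

import (
	"encoding/xml"
	"fmt"
)

type domain struct {
	Name       string `xml:"name"`
	Interfaces []struct {
		MAC struct {
			Address string `xml:"address,attr"`
		} `xml:"mac"`
		Source struct {
			Network string `xml:"network,attr"`
		} `xml:"source"`
	} `xml:"devices>interface"`
}

func main() {
	raw := `<domain type='kvm'><name>stopped-upgrade-646879</name><devices>
	  <interface type='network'><mac address='52:54:00:c3:85:7d'/><source network='mk-stopped-upgrade-646879'/></interface>
	  <interface type='network'><mac address='52:54:00:87:ba:01'/><source network='default'/></interface>
	</devices></domain>`
	var d domain
	if err := xml.Unmarshal([]byte(raw), &d); err != nil {
		panic(err)
	}
	for _, ifc := range d.Interfaces {
		fmt.Printf("domain %s has MAC %s in network %s\n", d.Name, ifc.MAC.Address, ifc.Source.Network)
	}
}
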
	I1018 15:27:34.446271 1804300 main.go:141] libmachine: (stopped-upgrade-646879) waiting for domain to start...
	I1018 15:27:34.447699 1804300 main.go:141] libmachine: (stopped-upgrade-646879) domain is now running
	I1018 15:27:34.447742 1804300 main.go:141] libmachine: (stopped-upgrade-646879) waiting for IP...
	I1018 15:27:34.448706 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | domain stopped-upgrade-646879 has defined MAC address 52:54:00:c3:85:7d in network mk-stopped-upgrade-646879
	I1018 15:27:34.449295 1804300 main.go:141] libmachine: (stopped-upgrade-646879) found domain IP: 192.168.50.247
	I1018 15:27:34.449319 1804300 main.go:141] libmachine: (stopped-upgrade-646879) reserving static IP address...
	I1018 15:27:34.449372 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | domain stopped-upgrade-646879 has current primary IP address 192.168.50.247 and MAC address 52:54:00:c3:85:7d in network mk-stopped-upgrade-646879
	I1018 15:27:34.449962 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | found host DHCP lease matching {name: "stopped-upgrade-646879", mac: "52:54:00:c3:85:7d", ip: "192.168.50.247"} in network mk-stopped-upgrade-646879: {Iface:virbr3 ExpiryTime:2025-10-18 16:26:48 +0000 UTC Type:0 Mac:52:54:00:c3:85:7d Iaid: IPaddr:192.168.50.247 Prefix:24 Hostname:stopped-upgrade-646879 Clientid:01:52:54:00:c3:85:7d}
	I1018 15:27:34.449997 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | skip adding static IP to network mk-stopped-upgrade-646879 - found existing host DHCP lease matching {name: "stopped-upgrade-646879", mac: "52:54:00:c3:85:7d", ip: "192.168.50.247"}
	I1018 15:27:34.450027 1804300 main.go:141] libmachine: (stopped-upgrade-646879) reserved static IP address 192.168.50.247 for domain stopped-upgrade-646879
	I1018 15:27:34.450044 1804300 main.go:141] libmachine: (stopped-upgrade-646879) waiting for SSH...
	I1018 15:27:34.450056 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | Getting to WaitForSSH function...
	I1018 15:27:34.453369 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | domain stopped-upgrade-646879 has defined MAC address 52:54:00:c3:85:7d in network mk-stopped-upgrade-646879
	I1018 15:27:34.453879 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:85:7d", ip: ""} in network mk-stopped-upgrade-646879: {Iface:virbr3 ExpiryTime:2025-10-18 16:26:48 +0000 UTC Type:0 Mac:52:54:00:c3:85:7d Iaid: IPaddr:192.168.50.247 Prefix:24 Hostname:stopped-upgrade-646879 Clientid:01:52:54:00:c3:85:7d}
	I1018 15:27:34.453919 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | domain stopped-upgrade-646879 has defined IP address 192.168.50.247 and MAC address 52:54:00:c3:85:7d in network mk-stopped-upgrade-646879
	I1018 15:27:34.454173 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | Using SSH client type: external
	I1018 15:27:34.454220 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | Using SSH private key: /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/stopped-upgrade-646879/id_rsa (-rw-------)
	I1018 15:27:34.454268 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.247 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/stopped-upgrade-646879/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1018 15:27:34.454285 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | About to run SSH command:
	I1018 15:27:34.454304 1804300 main.go:141] libmachine: (stopped-upgrade-646879) DBG | exit 0
	I1018 15:27:32.141682 1804089 provision.go:177] copyRemoteCerts
	I1018 15:27:32.141765 1804089 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 15:27:32.141796 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHHostname
	I1018 15:27:32.144895 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.145322 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:auto-320866 Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:32.145370 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.145593 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHPort
	I1018 15:27:32.145881 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHKeyPath
	I1018 15:27:32.146079 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHUsername
	I1018 15:27:32.146286 1804089 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/auto-320866/id_rsa Username:docker}
	I1018 15:27:32.234398 1804089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 15:27:32.268808 1804089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1018 15:27:32.302538 1804089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
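
copyRemoteCerts streams each PEM onto the guest. A sketch of the same effect without scp, piping the local file into `sudo tee` over ssh (host, key, and paths are illustrative only):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// copyToRemote streams a local file to a root-owned path on the guest.
func copyToRemote(host, keyPath, local, remote string) error {
	src, err := os.Open(local)
	if err != nil {
		return err
	}
	defer src.Close()
	cmd := exec.Command("ssh", "-i", keyPath,
		"-o", "StrictHostKeyChecking=no",
		"docker@"+host,
		fmt.Sprintf("sudo tee %s >/dev/null && sudo chmod 600 %s", remote, remote))
	cmd.Stdin = src // tee reads the file contents from stdin
	return cmd.Run()
}

func main() {
	// hypothetical arguments mirroring the ca.pem copy in the log
	if err := copyToRemote("192.168.39.149", "/path/to/id_rsa",
		"ca.pem", "/etc/docker/ca.pem"); err != nil {
		fmt.Fprintln(os.Stderr, "copy failed:", err)
	}
}
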
	I1018 15:27:32.336859 1804089 provision.go:87] duration metric: took 699.045397ms to configureAuth
	I1018 15:27:32.336889 1804089 buildroot.go:189] setting minikube options for container-runtime
	I1018 15:27:32.337109 1804089 config.go:182] Loaded profile config "auto-320866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:27:32.337220 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHHostname
	I1018 15:27:32.340814 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.341273 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:auto-320866 Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:32.341303 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.341592 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHPort
	I1018 15:27:32.341818 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHKeyPath
	I1018 15:27:32.342062 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHKeyPath
	I1018 15:27:32.342214 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHUsername
	I1018 15:27:32.342492 1804089 main.go:141] libmachine: Using SSH client type: native
	I1018 15:27:32.342724 1804089 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1018 15:27:32.342750 1804089 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 15:27:32.605236 1804089 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 15:27:32.605269 1804089 main.go:141] libmachine: Checking connection to Docker...
	I1018 15:27:32.605279 1804089 main.go:141] libmachine: (auto-320866) Calling .GetURL
	I1018 15:27:32.606655 1804089 main.go:141] libmachine: (auto-320866) DBG | using libvirt version 8000000
	I1018 15:27:32.609760 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.610201 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:auto-320866 Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:32.610228 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.610462 1804089 main.go:141] libmachine: Docker is up and running!
	I1018 15:27:32.610480 1804089 main.go:141] libmachine: Reticulating splines...
	I1018 15:27:32.610488 1804089 client.go:171] duration metric: took 20.372216688s to LocalClient.Create
	I1018 15:27:32.610515 1804089 start.go:167] duration metric: took 20.372288901s to libmachine.API.Create "auto-320866"
	I1018 15:27:32.610526 1804089 start.go:293] postStartSetup for "auto-320866" (driver="kvm2")
	I1018 15:27:32.610536 1804089 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 15:27:32.610560 1804089 main.go:141] libmachine: (auto-320866) Calling .DriverName
	I1018 15:27:32.610860 1804089 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 15:27:32.610901 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHHostname
	I1018 15:27:32.613716 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.614091 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:auto-320866 Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:32.614120 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.614356 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHPort
	I1018 15:27:32.614578 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHKeyPath
	I1018 15:27:32.614754 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHUsername
	I1018 15:27:32.614935 1804089 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/auto-320866/id_rsa Username:docker}
	I1018 15:27:32.702477 1804089 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 15:27:32.708272 1804089 info.go:137] Remote host: Buildroot 2025.02
	I1018 15:27:32.708315 1804089 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1755824/.minikube/addons for local assets ...
	I1018 15:27:32.708401 1804089 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1755824/.minikube/files for local assets ...
	I1018 15:27:32.708477 1804089 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-1755824/.minikube/files/etc/ssl/certs/17597922.pem -> 17597922.pem in /etc/ssl/certs
	I1018 15:27:32.708620 1804089 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 15:27:32.721109 1804089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/files/etc/ssl/certs/17597922.pem --> /etc/ssl/certs/17597922.pem (1708 bytes)
	I1018 15:27:32.754337 1804089 start.go:296] duration metric: took 143.793935ms for postStartSetup
	I1018 15:27:32.754440 1804089 main.go:141] libmachine: (auto-320866) Calling .GetConfigRaw
	I1018 15:27:32.755367 1804089 main.go:141] libmachine: (auto-320866) Calling .GetIP
	I1018 15:27:32.758406 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.758785 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:auto-320866 Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:32.758818 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.759074 1804089 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/config.json ...
	I1018 15:27:32.759331 1804089 start.go:128] duration metric: took 20.542032797s to createHost
	I1018 15:27:32.759376 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHHostname
	I1018 15:27:32.762139 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.762524 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:auto-320866 Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:32.762544 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.762771 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHPort
	I1018 15:27:32.762979 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHKeyPath
	I1018 15:27:32.763137 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHKeyPath
	I1018 15:27:32.763286 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHUsername
	I1018 15:27:32.763490 1804089 main.go:141] libmachine: Using SSH client type: native
	I1018 15:27:32.763775 1804089 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1018 15:27:32.763791 1804089 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1018 15:27:32.880669 1804089 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760801252.836292185
	
	I1018 15:27:32.880696 1804089 fix.go:216] guest clock: 1760801252.836292185
	I1018 15:27:32.880703 1804089 fix.go:229] Guest: 2025-10-18 15:27:32.836292185 +0000 UTC Remote: 2025-10-18 15:27:32.759360109 +0000 UTC m=+20.695537015 (delta=76.932076ms)
	I1018 15:27:32.880726 1804089 fix.go:200] guest clock delta is within tolerance: 76.932076ms
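
The guest-clock check compares the VM's `date +%s.%N` output against the host clock and accepts a small delta, as the fix.go lines above show. A sketch of that comparison (the 1s tolerance and key path are assumptions):

package main

import (
	"fmt"
	"math"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func main() {
	out, err := exec.Command("ssh", "-i", "/path/to/id_rsa", // hypothetical key
		"docker@192.168.39.149", "date +%s.%N").Output()
	if err != nil {
		panic(err)
	}
	guest, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		panic(err)
	}
	host := float64(time.Now().UnixNano()) / 1e9
	delta := time.Duration(math.Abs(host-guest) * float64(time.Second))
	fmt.Printf("guest clock delta: %v (within 1s tolerance: %v)\n", delta, delta < time.Second)
}
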
	I1018 15:27:32.880731 1804089 start.go:83] releasing machines lock for "auto-320866", held for 20.663540336s
	I1018 15:27:32.880760 1804089 main.go:141] libmachine: (auto-320866) Calling .DriverName
	I1018 15:27:32.881153 1804089 main.go:141] libmachine: (auto-320866) Calling .GetIP
	I1018 15:27:32.884771 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.885266 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:auto-320866 Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:32.885299 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.885588 1804089 main.go:141] libmachine: (auto-320866) Calling .DriverName
	I1018 15:27:32.886294 1804089 main.go:141] libmachine: (auto-320866) Calling .DriverName
	I1018 15:27:32.886529 1804089 main.go:141] libmachine: (auto-320866) Calling .DriverName
	I1018 15:27:32.886645 1804089 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 15:27:32.886727 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHHostname
	I1018 15:27:32.886801 1804089 ssh_runner.go:195] Run: cat /version.json
	I1018 15:27:32.886831 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHHostname
	I1018 15:27:32.891052 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.892252 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:auto-320866 Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:32.892282 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.892308 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.892594 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHPort
	I1018 15:27:32.892840 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHKeyPath
	I1018 15:27:32.892869 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:auto-320866 Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:32.892896 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:32.893157 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHUsername
	I1018 15:27:32.893161 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHPort
	I1018 15:27:32.893360 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHKeyPath
	I1018 15:27:32.893349 1804089 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/auto-320866/id_rsa Username:docker}
	I1018 15:27:32.893522 1804089 main.go:141] libmachine: (auto-320866) Calling .GetSSHUsername
	I1018 15:27:32.893711 1804089 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/auto-320866/id_rsa Username:docker}
	I1018 15:27:33.003446 1804089 ssh_runner.go:195] Run: systemctl --version
	I1018 15:27:33.010304 1804089 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 15:27:33.171421 1804089 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 15:27:33.181773 1804089 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 15:27:33.181884 1804089 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 15:27:33.205245 1804089 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 15:27:33.205273 1804089 start.go:495] detecting cgroup driver to use...
	I1018 15:27:33.205373 1804089 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 15:27:33.226108 1804089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 15:27:33.246616 1804089 docker.go:218] disabling cri-docker service (if available) ...
	I1018 15:27:33.246715 1804089 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 15:27:33.269113 1804089 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 15:27:33.289669 1804089 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 15:27:33.455092 1804089 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 15:27:33.687419 1804089 docker.go:234] disabling docker service ...
	I1018 15:27:33.687507 1804089 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 15:27:33.705948 1804089 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 15:27:33.729314 1804089 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 15:27:33.912416 1804089 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 15:27:34.076011 1804089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 15:27:34.094033 1804089 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 15:27:34.123890 1804089 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 15:27:34.123952 1804089 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:27:34.140864 1804089 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 15:27:34.140956 1804089 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:27:34.156274 1804089 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:27:34.173832 1804089 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:27:34.188468 1804089 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 15:27:34.205042 1804089 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:27:34.219960 1804089 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:27:34.246507 1804089 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:27:34.262043 1804089 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 15:27:34.274149 1804089 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1018 15:27:34.274213 1804089 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1018 15:27:34.301602 1804089 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
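
The sysctl failure above is expected before br_netfilter is loaded; the log shows the fallback of loading the module and then enabling IPv4 forwarding. A sketch of that sequence:

package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %w: %s", name, args, err, out)
	}
	return nil
}

func main() {
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		// the key is absent until the module loads, so this "might be okay"
		fmt.Println("sysctl probe failed, loading br_netfilter:", err)
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			panic(err)
		}
	}
	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		panic(err)
	}
}
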
	I1018 15:27:34.322941 1804089 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:27:34.495220 1804089 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 15:27:34.613259 1804089 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 15:27:34.613366 1804089 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 15:27:34.620365 1804089 start.go:563] Will wait 60s for crictl version
	I1018 15:27:34.620448 1804089 ssh_runner.go:195] Run: which crictl
	I1018 15:27:34.625474 1804089 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1018 15:27:34.671923 1804089 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1018 15:27:34.672021 1804089 ssh_runner.go:195] Run: crio --version
	I1018 15:27:34.705835 1804089 ssh_runner.go:195] Run: crio --version
	I1018 15:27:34.742448 1804089 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1018 15:27:34.743772 1804089 main.go:141] libmachine: (auto-320866) Calling .GetIP
	I1018 15:27:34.747306 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:34.747780 1804089 main.go:141] libmachine: (auto-320866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:b9:cb", ip: ""} in network mk-auto-320866: {Iface:virbr1 ExpiryTime:2025-10-18 16:27:29 +0000 UTC Type:0 Mac:52:54:00:f3:b9:cb Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:auto-320866 Clientid:01:52:54:00:f3:b9:cb}
	I1018 15:27:34.747808 1804089 main.go:141] libmachine: (auto-320866) DBG | domain auto-320866 has defined IP address 192.168.39.149 and MAC address 52:54:00:f3:b9:cb in network mk-auto-320866
	I1018 15:27:34.748223 1804089 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1018 15:27:34.754278 1804089 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
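The bash one-liner above is an idempotent hosts-file update: drop any line already ending in the hostname, append the fresh mapping, and copy the result back into place via sudo. The same shape in Go (an illustrative helper, not minikube's own code):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry keeps exactly one "ip<TAB>host" line in hostsPath,
    // mirroring the grep -v / echo / sudo cp pipeline above.
    func ensureHostsEntry(hostsPath, ip, host string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }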
	I1018 15:27:34.772633 1804089 kubeadm.go:883] updating cluster {Name:auto-320866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-320866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 15:27:34.772779 1804089 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:27:34.772852 1804089 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 15:27:34.817485 1804089 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1018 15:27:34.817578 1804089 ssh_runner.go:195] Run: which lz4
	I1018 15:27:34.822795 1804089 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1018 15:27:34.829171 1804089 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1018 15:27:34.829207 1804089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1018 15:27:36.739614 1804089 crio.go:462] duration metric: took 1.916873546s to copy over tarball
	I1018 15:27:36.739706 1804089 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	W1018 15:27:35.256007 1803906 pod_ready.go:104] pod "etcd-pause-153767" is not "Ready", error: <nil>
	W1018 15:27:37.755592 1803906 pod_ready.go:104] pod "etcd-pause-153767" is not "Ready", error: <nil>
	I1018 15:27:38.490838 1804089 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.751094877s)
	I1018 15:27:38.490879 1804089 crio.go:469] duration metric: took 1.751221412s to extract the tarball
	I1018 15:27:38.490892 1804089 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1018 15:27:38.535563 1804089 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 15:27:38.585531 1804089 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 15:27:38.585579 1804089 cache_images.go:85] Images are preloaded, skipping loading
	I1018 15:27:38.585590 1804089 kubeadm.go:934] updating node { 192.168.39.149 8443 v1.34.1 crio true true} ...
	I1018 15:27:38.585752 1804089 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-320866 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-320866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
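One detail worth calling out in the rendered unit: the bare ExecStart= line is not a mistake. In a systemd drop-in (here the 10-kubeadm.conf that is scp'd a few lines below), an empty assignment clears the ExecStart inherited from kubelet.service, so the override's full command line replaces it instead of being appended as a second ExecStart. The general pattern (paths illustrative):

    [Service]
    ExecStart=
    ExecStart=/path/to/new/binary --with --flags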
	I1018 15:27:38.585888 1804089 ssh_runner.go:195] Run: crio config
	I1018 15:27:38.649069 1804089 cni.go:84] Creating CNI manager for ""
	I1018 15:27:38.649097 1804089 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 15:27:38.649117 1804089 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 15:27:38.649140 1804089 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.149 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-320866 NodeName:auto-320866 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.149"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.149 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 15:27:38.649262 1804089 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.149
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-320866"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.149"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.149"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 15:27:38.649360 1804089 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 15:27:38.664057 1804089 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 15:27:38.664147 1804089 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 15:27:38.681451 1804089 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1018 15:27:38.712362 1804089 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 15:27:38.742102 1804089 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1018 15:27:38.768948 1804089 ssh_runner.go:195] Run: grep 192.168.39.149	control-plane.minikube.internal$ /etc/hosts
	I1018 15:27:38.775124 1804089 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.149	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 15:27:38.797532 1804089 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:27:38.964464 1804089 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 15:27:39.013851 1804089 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866 for IP: 192.168.39.149
	I1018 15:27:39.013891 1804089 certs.go:195] generating shared ca certs ...
	I1018 15:27:39.013916 1804089 certs.go:227] acquiring lock for ca certs: {Name:mk20fae4d22bb4937e66ac0eaa52c1608fa22770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:27:39.014130 1804089 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.key
	I1018 15:27:39.014194 1804089 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.key
	I1018 15:27:39.014208 1804089 certs.go:257] generating profile certs ...
	I1018 15:27:39.014282 1804089 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/client.key
	I1018 15:27:39.014303 1804089 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/client.crt with IP's: []
	I1018 15:27:39.200100 1804089 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/client.crt ...
	I1018 15:27:39.200138 1804089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/client.crt: {Name:mka7d440c592c7c10bc0b3c3bb53a1b06d125246 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:27:39.200390 1804089 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/client.key ...
	I1018 15:27:39.200410 1804089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/client.key: {Name:mkc87a82c0f3aa5dc9da51162b0d987c2c458895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:27:39.200546 1804089 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/apiserver.key.6800b0e0
	I1018 15:27:39.200570 1804089 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/apiserver.crt.6800b0e0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.149]
	I1018 15:27:39.392258 1804089 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/apiserver.crt.6800b0e0 ...
	I1018 15:27:39.392290 1804089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/apiserver.crt.6800b0e0: {Name:mk08b05c19f618281ce00fe2e4927159dcb4b2d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:27:39.392481 1804089 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/apiserver.key.6800b0e0 ...
	I1018 15:27:39.392495 1804089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/apiserver.key.6800b0e0: {Name:mk36ce21393ce9885e30fa3f6b117483c2f44248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:27:39.392571 1804089 certs.go:382] copying /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/apiserver.crt.6800b0e0 -> /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/apiserver.crt
	I1018 15:27:39.392670 1804089 certs.go:386] copying /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/apiserver.key.6800b0e0 -> /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/apiserver.key
	I1018 15:27:39.392725 1804089 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/proxy-client.key
	I1018 15:27:39.392740 1804089 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/proxy-client.crt with IP's: []
	I1018 15:27:39.447183 1804089 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/proxy-client.crt ...
	I1018 15:27:39.447216 1804089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/proxy-client.crt: {Name:mk6680622925c699f8a2a2271a91a1a7fede3aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:27:39.447398 1804089 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/proxy-client.key ...
	I1018 15:27:39.447410 1804089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/proxy-client.key: {Name:mk3bb3acfcab493ec7cdf1e8c83831c98160e0a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
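All three profile certs above follow the same recipe: generate a fresh key, then sign a certificate with the existing minikubeCA, embedding IP SANs where needed (for the apiserver cert, 10.96.0.1 is the first address of the 10.96.0.0/12 service CIDR, alongside localhost and the node IP 192.168.39.149). A minimal sketch with Go's standard library, not minikube's crypto.go:

    package sketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // signedCert issues a CA-signed serving certificate for the given IP SANs,
    // the same shape as the apiserver.crt generated above.
    func signedCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips ...string) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0), // cf. CertExpiration:26280h0m0s (3 years)
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        for _, ip := range ips {
            tmpl.IPAddresses = append(tmpl.IPAddresses, net.ParseIP(ip))
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return der, key, err
    }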
	I1018 15:27:39.447589 1804089 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/1759792.pem (1338 bytes)
	W1018 15:27:39.447624 1804089 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/1759792_empty.pem, impossibly tiny 0 bytes
	I1018 15:27:39.447636 1804089 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 15:27:39.447656 1804089 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/ca.pem (1082 bytes)
	I1018 15:27:39.447677 1804089 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/cert.pem (1123 bytes)
	I1018 15:27:39.447701 1804089 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/key.pem (1675 bytes)
	I1018 15:27:39.447748 1804089 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1755824/.minikube/files/etc/ssl/certs/17597922.pem (1708 bytes)
	I1018 15:27:39.448413 1804089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 15:27:39.482647 1804089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 15:27:39.522855 1804089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 15:27:39.557777 1804089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 15:27:39.592579 1804089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1018 15:27:39.626891 1804089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 15:27:39.660802 1804089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 15:27:39.700382 1804089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 15:27:39.744034 1804089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 15:27:39.790607 1804089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/certs/1759792.pem --> /usr/share/ca-certificates/1759792.pem (1338 bytes)
	I1018 15:27:39.824608 1804089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1755824/.minikube/files/etc/ssl/certs/17597922.pem --> /usr/share/ca-certificates/17597922.pem (1708 bytes)
	I1018 15:27:39.859123 1804089 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 15:27:39.886795 1804089 ssh_runner.go:195] Run: openssl version
	I1018 15:27:39.894400 1804089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 15:27:39.910131 1804089 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:27:39.916370 1804089 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 14:09 /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:27:39.916473 1804089 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:27:39.924990 1804089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 15:27:39.939836 1804089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1759792.pem && ln -fs /usr/share/ca-certificates/1759792.pem /etc/ssl/certs/1759792.pem"
	I1018 15:27:39.959201 1804089 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1759792.pem
	I1018 15:27:39.965952 1804089 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 14:22 /usr/share/ca-certificates/1759792.pem
	I1018 15:27:39.966032 1804089 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1759792.pem
	I1018 15:27:39.973932 1804089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1759792.pem /etc/ssl/certs/51391683.0"
	I1018 15:27:39.988893 1804089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17597922.pem && ln -fs /usr/share/ca-certificates/17597922.pem /etc/ssl/certs/17597922.pem"
	I1018 15:27:40.004371 1804089 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17597922.pem
	I1018 15:27:40.010613 1804089 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 14:22 /usr/share/ca-certificates/17597922.pem
	I1018 15:27:40.010678 1804089 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17597922.pem
	I1018 15:27:40.020729 1804089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17597922.pem /etc/ssl/certs/3ec20f2e.0"
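The openssl/ln pairs above install each PEM into the system trust store the way OpenSSL expects: `openssl x509 -hash` prints the subject-name hash (b5213941, 51391683, 3ec20f2e in this run), and a `<hash>.0` symlink under /etc/ssl/certs is what the library's lookup-by-hash finds. A small Go equivalent, illustrative and shelling out to the same openssl binary:

    package main

    import (
        "os"
        "os/exec"
        "strings"
    )

    // hashLink creates /etc/ssl/certs/<subject-hash>.0 -> pemPath,
    // reproducing the `openssl x509 -hash` + `ln -fs` steps above.
    func hashLink(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        _ = os.Remove(link) // emulate ln -fs: replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            panic(err)
        }
    }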
	I1018 15:27:40.036651 1804089 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 15:27:40.042254 1804089 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 15:27:40.042336 1804089 kubeadm.go:400] StartCluster: {Name:auto-320866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-320866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:27:40.042439 1804089 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 15:27:40.042493 1804089 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 15:27:40.090565 1804089 cri.go:89] found id: ""
	I1018 15:27:40.090659 1804089 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 15:27:40.104405 1804089 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 15:27:40.118413 1804089 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 15:27:40.132287 1804089 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 15:27:40.132316 1804089 kubeadm.go:157] found existing configuration files:
	
	I1018 15:27:40.132380 1804089 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 15:27:40.146747 1804089 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 15:27:40.146824 1804089 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 15:27:40.160242 1804089 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 15:27:40.176021 1804089 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 15:27:40.176108 1804089 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 15:27:40.193586 1804089 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 15:27:40.207559 1804089 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 15:27:40.207642 1804089 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 15:27:40.223303 1804089 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 15:27:40.236077 1804089 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 15:27:40.236157 1804089 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
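The four grep/rm pairs above are the stale-config sweep: on a first start none of the kubeconfigs exist, so every grep exits with status 2 and the rm -f calls are no-ops, leaving a clean slate for kubeadm init. The loop's shape (hypothetical run helper; the commands themselves are from the log):

    // sweepStaleKubeconfigs removes any kubeconfig that does not already
    // point at the expected control-plane endpoint. run executes a shell
    // command on the VM and returns its error.
    func sweepStaleKubeconfigs(run func(cmd string) error) {
        for _, conf := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            path := "/etc/kubernetes/" + conf
            if run("sudo grep https://control-plane.minikube.internal:8443 "+path) != nil {
                _ = run("sudo rm -f " + path) // missing or stale: drop it
            }
        }
    }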
	I1018 15:27:40.251952 1804089 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1018 15:27:40.316528 1804089 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 15:27:40.316604 1804089 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 15:27:40.434690 1804089 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 15:27:40.434902 1804089 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 15:27:40.435054 1804089 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 15:27:40.459518 1804089 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 15:27:39.756391 1803906 pod_ready.go:94] pod "etcd-pause-153767" is "Ready"
	I1018 15:27:39.756427 1803906 pod_ready.go:86] duration metric: took 11.008679807s for pod "etcd-pause-153767" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:27:39.759320 1803906 pod_ready.go:83] waiting for pod "kube-apiserver-pause-153767" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:27:39.765926 1803906 pod_ready.go:94] pod "kube-apiserver-pause-153767" is "Ready"
	I1018 15:27:39.765962 1803906 pod_ready.go:86] duration metric: took 6.601881ms for pod "kube-apiserver-pause-153767" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:27:39.768524 1803906 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-153767" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:27:41.014968 1803906 pod_ready.go:94] pod "kube-controller-manager-pause-153767" is "Ready"
	I1018 15:27:41.015003 1803906 pod_ready.go:86] duration metric: took 1.246451729s for pod "kube-controller-manager-pause-153767" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:27:41.018318 1803906 pod_ready.go:83] waiting for pod "kube-proxy-nk7dv" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:27:41.024126 1803906 pod_ready.go:94] pod "kube-proxy-nk7dv" is "Ready"
	I1018 15:27:41.024159 1803906 pod_ready.go:86] duration metric: took 5.802503ms for pod "kube-proxy-nk7dv" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:27:41.026900 1803906 pod_ready.go:83] waiting for pod "kube-scheduler-pause-153767" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:27:41.031844 1803906 pod_ready.go:94] pod "kube-scheduler-pause-153767" is "Ready"
	I1018 15:27:41.031879 1803906 pod_ready.go:86] duration metric: took 4.943147ms for pod "kube-scheduler-pause-153767" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:27:41.031894 1803906 pod_ready.go:40] duration metric: took 14.806223688s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 15:27:41.086118 1803906 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 15:27:41.177844 1803906 out.go:179] * Done! kubectl is now configured to use "pause-153767" cluster and "default" namespace by default
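Interleaved with the auto-320866 bring-up, process 1803906 finishes the pause-153767 run above: each control-plane pod is polled until it reports Ready (or disappears), with per-pod durations logged. A rough client-go sketch of that wait, assuming an already-built clientset (names here are illustrative, not pod_ready.go's actual API):

    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitReadyOrGone polls until the named pod is Ready or no longer exists.
    func waitReadyOrGone(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 4*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if apierrors.IsNotFound(err) {
                    return true, nil // pod is gone: also counts as done
                }
                if err != nil {
                    return false, nil // transient API error: keep polling
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }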
	I1018 15:27:40.686597 1804089 out.go:252]   - Generating certificates and keys ...
	I1018 15:27:40.686734 1804089 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 15:27:40.686892 1804089 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 15:27:40.687025 1804089 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	
	
	==> CRI-O <==
	Oct 18 15:27:44 pause-153767 crio[3323]: time="2025-10-18 15:27:44.726130641Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19b3990d-ceef-4966-91b9-cc33088d1189 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 15:27:44 pause-153767 crio[3323]: time="2025-10-18 15:27:44.726393193Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:21a568cafd8fca8bdd14531095c7015e724c24bbd085114951c559efb285489f,PodSandboxId:fd49bed045f2121e1bffed9753b08d14189d5271003239da9390b4b864de23f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760801244041919702,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2ztp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e28f3cfe-ccea-418b-9644-100bb187e0ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bea487e2380ff22474d8b53ac0c150513df07dad107cccdd078e760b889d4500,PodSandboxId:e4a4c1ff540678a7eecdd5b6742a5a945c87ab662295f3aae247480ac5baf728,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760801244021150038,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nk7dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95bf3faf-25ed-4469-9495-c37a4b55623b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5466f1480dc7718ecdada10117683d12668e8a7df3e23c2d59cd6aafbe1b45d,PodSandboxId:dc2e696af7db1bba846fad40943e731d3c665fcdafa20e4adc74247e1c0e319f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760801240282027384,Labels:map[string]string{io.kubernetes.container.name: etc
d,io.kubernetes.pod.name: etcd-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367319ad47c86ef802bed3f9ea34bf64,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:866b3859d3435292ba26b8279d82fec285b774e2995d5bc2d6fbe526ba01541e,PodSandboxId:7f795ccd6e6a1c8c6aba1349c95f40b0f935c3a466ddf50cfbcd9f4173115511,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,Creat
edAt:1760801240233255038,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1740a1cc4f8df7ba2aa7832e64e9753e,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d47c959b5afb3bd0512ac7716dce6a3ce20efe46488b887d0e822c09ed3fd25,PodSandboxId:602f1c56f8fbbd614dae0ece9ac878c2a4719293ce8d068c20237650dbdb69fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760801240279772452,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fafc30208e669c610c46e609c1377925,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cecb5df4eb46678d63bbbb832efea6a3954498e9c3d34ed051e7659d3754b2,PodSandboxId:b0dcbe14650b883c8b37a8019c368d923bd32775dc147639bda828f8bb769463,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc2152
7912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760801240193616801,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a16f738dddf4cf0754808042f73e183,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c34cce37ad9085a41dd300f3b2ca1aac9a93bb413c9b916ea9373feb06538cb,PodSandboxId:4856f934ceee0b175d5d3836914db39765a744b2a67836
0b565c778cdd4f4516,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760801233592368997,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nk7dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95bf3faf-25ed-4469-9495-c37a4b55623b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6f334e8dc36521935cdc9d4f5c25a21972c81a86da9a57acc28da27a052059,PodSandboxId:cef67b7a71988336fb9ea07d02d484f9a587475fa35bb857450d93e1e0a5ef88,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760801233506757700,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367319ad47c86ef802bed3f9ea34bf64,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6def6b6b8e27732be2161935ee97726ca9a087ca6cf6eb45ce93002e5be07331,PodSandboxId:86dcc162
895b287a63f5fc2b69cf8981fba1a72b86c0d798aba11b15abaebb97,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760801233589372219,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fafc30208e669c610c46e609c1377925,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},
},&Container{Id:2867fda4fb320f98bf6f1718fcdfc07eb29558e3d79aafb6c7a061e3085ccf7c,PodSandboxId:d1740c9e8fae0754bf25412ea9ada8029ecd027740d410c17498607d2bd1dbef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760801233372970242,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a16f738dddf4cf0754808042f73e183,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb528bc32dbb1002be0cffac24ce673dfd1509c69e86dc426a4e7871f3e89a13,PodSandboxId:866056d445c9bd595fc4e8d47cdc49f83b5088dcd648fa78c73b41adcf3dd72d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760801233288115839,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1740a1cc4f8df7ba2aa7832e64e9753e,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259
,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b5b5aaf2a5329d2967c6a33790286ccc71f927089704cce0272c4edbe7f1026,PodSandboxId:211f39e68c4ad62c2fa1abd7dc0da9199f04b7dfad48f20f7286c9a61eb59150,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760801184756999213,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2ztp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e28f3cfe-ccea-418b-9644-100bb187e0ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=19b3990d-ceef-4966-91b9-cc33088d1189 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 15:27:44 pause-153767 crio[3323]: time="2025-10-18 15:27:44.787418245Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=50f37b3f-5bee-401c-8786-44031da81d80 name=/runtime.v1.RuntimeService/Version
	Oct 18 15:27:44 pause-153767 crio[3323]: time="2025-10-18 15:27:44.787513009Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=50f37b3f-5bee-401c-8786-44031da81d80 name=/runtime.v1.RuntimeService/Version
	Oct 18 15:27:44 pause-153767 crio[3323]: time="2025-10-18 15:27:44.790634541Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3684ca02-e5ed-4b7c-8d9a-89fb3648dd90 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 15:27:44 pause-153767 crio[3323]: time="2025-10-18 15:27:44.791170709Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760801264791143285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3684ca02-e5ed-4b7c-8d9a-89fb3648dd90 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 15:27:44 pause-153767 crio[3323]: time="2025-10-18 15:27:44.792078616Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b53c5351-5a83-401e-a582-aef1ab98c950 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 15:27:44 pause-153767 crio[3323]: time="2025-10-18 15:27:44.792542769Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b53c5351-5a83-401e-a582-aef1ab98c950 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 15:27:44 pause-153767 crio[3323]: time="2025-10-18 15:27:44.793217556Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:21a568cafd8fca8bdd14531095c7015e724c24bbd085114951c559efb285489f,PodSandboxId:fd49bed045f2121e1bffed9753b08d14189d5271003239da9390b4b864de23f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760801244041919702,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2ztp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e28f3cfe-ccea-418b-9644-100bb187e0ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bea487e2380ff22474d8b53ac0c150513df07dad107cccdd078e760b889d4500,PodSandboxId:e4a4c1ff540678a7eecdd5b6742a5a945c87ab662295f3aae247480ac5baf728,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760801244021150038,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nk7dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95bf3faf-25ed-4469-9495-c37a4b55623b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5466f1480dc7718ecdada10117683d12668e8a7df3e23c2d59cd6aafbe1b45d,PodSandboxId:dc2e696af7db1bba846fad40943e731d3c665fcdafa20e4adc74247e1c0e319f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760801240282027384,Labels:map[string]string{io.kubernetes.container.name: etc
d,io.kubernetes.pod.name: etcd-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367319ad47c86ef802bed3f9ea34bf64,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:866b3859d3435292ba26b8279d82fec285b774e2995d5bc2d6fbe526ba01541e,PodSandboxId:7f795ccd6e6a1c8c6aba1349c95f40b0f935c3a466ddf50cfbcd9f4173115511,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,Creat
edAt:1760801240233255038,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1740a1cc4f8df7ba2aa7832e64e9753e,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d47c959b5afb3bd0512ac7716dce6a3ce20efe46488b887d0e822c09ed3fd25,PodSandboxId:602f1c56f8fbbd614dae0ece9ac878c2a4719293ce8d068c20237650dbdb69fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760801240279772452,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fafc30208e669c610c46e609c1377925,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cecb5df4eb46678d63bbbb832efea6a3954498e9c3d34ed051e7659d3754b2,PodSandboxId:b0dcbe14650b883c8b37a8019c368d923bd32775dc147639bda828f8bb769463,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc2152
7912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760801240193616801,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a16f738dddf4cf0754808042f73e183,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c34cce37ad9085a41dd300f3b2ca1aac9a93bb413c9b916ea9373feb06538cb,PodSandboxId:4856f934ceee0b175d5d3836914db39765a744b2a67836
0b565c778cdd4f4516,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760801233592368997,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nk7dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95bf3faf-25ed-4469-9495-c37a4b55623b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6f334e8dc36521935cdc9d4f5c25a21972c81a86da9a57acc28da27a052059,PodSandboxId:cef67b7a71988336fb9ea07d02d484f9a587475fa35bb857450d93e1e0a5ef88,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760801233506757700,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367319ad47c86ef802bed3f9ea34bf64,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6def6b6b8e27732be2161935ee97726ca9a087ca6cf6eb45ce93002e5be07331,PodSandboxId:86dcc162
895b287a63f5fc2b69cf8981fba1a72b86c0d798aba11b15abaebb97,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760801233589372219,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fafc30208e669c610c46e609c1377925,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},
},&Container{Id:2867fda4fb320f98bf6f1718fcdfc07eb29558e3d79aafb6c7a061e3085ccf7c,PodSandboxId:d1740c9e8fae0754bf25412ea9ada8029ecd027740d410c17498607d2bd1dbef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760801233372970242,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a16f738dddf4cf0754808042f73e183,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb528bc32dbb1002be0cffac24ce673dfd1509c69e86dc426a4e7871f3e89a13,PodSandboxId:866056d445c9bd595fc4e8d47cdc49f83b5088dcd648fa78c73b41adcf3dd72d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760801233288115839,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1740a1cc4f8df7ba2aa7832e64e9753e,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259
,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b5b5aaf2a5329d2967c6a33790286ccc71f927089704cce0272c4edbe7f1026,PodSandboxId:211f39e68c4ad62c2fa1abd7dc0da9199f04b7dfad48f20f7286c9a61eb59150,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760801184756999213,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2ztp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e28f3cfe-ccea-418b-9644-100bb187e0ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b53c5351-5a83-401e-a582-aef1ab98c950 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 15:27:44 pause-153767 crio[3323]: time="2025-10-18 15:27:44.846364145Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4afc69ba-2e61-41f7-85c6-df277364fb6a name=/runtime.v1.RuntimeService/Version
	Oct 18 15:27:44 pause-153767 crio[3323]: time="2025-10-18 15:27:44.846451484Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4afc69ba-2e61-41f7-85c6-df277364fb6a name=/runtime.v1.RuntimeService/Version
	Oct 18 15:27:44 pause-153767 crio[3323]: time="2025-10-18 15:27:44.852432694Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=43585321-460d-41ae-b5ce-866ecad68b15 name=/runtime.v1.RuntimeService/Status
	Oct 18 15:27:44 pause-153767 crio[3323]: time="2025-10-18 15:27:44.852769329Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=43585321-460d-41ae-b5ce-866ecad68b15 name=/runtime.v1.RuntimeService/Status
	Oct 18 15:27:44 pause-153767 crio[3323]: time="2025-10-18 15:27:44.853132587Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c643cd6a-0db6-409e-b157-763ed307e192 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 15:27:44 pause-153767 crio[3323]: time="2025-10-18 15:27:44.853686958Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760801264853640604,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c643cd6a-0db6-409e-b157-763ed307e192 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 15:27:44 pause-153767 crio[3323]: time="2025-10-18 15:27:44.854486413Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=70f6ca2f-6e6b-4aec-a075-8b31e690bea9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 15:27:44 pause-153767 crio[3323]: time="2025-10-18 15:27:44.854758165Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=70f6ca2f-6e6b-4aec-a075-8b31e690bea9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 15:27:44 pause-153767 crio[3323]: time="2025-10-18 15:27:44.855391015Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:21a568cafd8fca8bdd14531095c7015e724c24bbd085114951c559efb285489f,PodSandboxId:fd49bed045f2121e1bffed9753b08d14189d5271003239da9390b4b864de23f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760801244041919702,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2ztp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e28f3cfe-ccea-418b-9644-100bb187e0ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bea487e2380ff22474d8b53ac0c150513df07dad107cccdd078e760b889d4500,PodSandboxId:e4a4c1ff540678a7eecdd5b6742a5a945c87ab662295f3aae247480ac5baf728,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760801244021150038,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nk7dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95bf3faf-25ed-4469-9495-c37a4b55623b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5466f1480dc7718ecdada10117683d12668e8a7df3e23c2d59cd6aafbe1b45d,PodSandboxId:dc2e696af7db1bba846fad40943e731d3c665fcdafa20e4adc74247e1c0e319f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760801240282027384,Labels:map[string]string{io.kubernetes.container.name: etc
d,io.kubernetes.pod.name: etcd-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367319ad47c86ef802bed3f9ea34bf64,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:866b3859d3435292ba26b8279d82fec285b774e2995d5bc2d6fbe526ba01541e,PodSandboxId:7f795ccd6e6a1c8c6aba1349c95f40b0f935c3a466ddf50cfbcd9f4173115511,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,Creat
edAt:1760801240233255038,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1740a1cc4f8df7ba2aa7832e64e9753e,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d47c959b5afb3bd0512ac7716dce6a3ce20efe46488b887d0e822c09ed3fd25,PodSandboxId:602f1c56f8fbbd614dae0ece9ac878c2a4719293ce8d068c20237650dbdb69fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760801240279772452,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fafc30208e669c610c46e609c1377925,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cecb5df4eb46678d63bbbb832efea6a3954498e9c3d34ed051e7659d3754b2,PodSandboxId:b0dcbe14650b883c8b37a8019c368d923bd32775dc147639bda828f8bb769463,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc2152
7912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760801240193616801,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a16f738dddf4cf0754808042f73e183,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c34cce37ad9085a41dd300f3b2ca1aac9a93bb413c9b916ea9373feb06538cb,PodSandboxId:4856f934ceee0b175d5d3836914db39765a744b2a67836
0b565c778cdd4f4516,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760801233592368997,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nk7dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95bf3faf-25ed-4469-9495-c37a4b55623b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6f334e8dc36521935cdc9d4f5c25a21972c81a86da9a57acc28da27a052059,PodSandboxId:cef67b7a71988336fb9ea07d02d484f9a587475fa35bb857450d93e1e0a5ef88,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760801233506757700,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367319ad47c86ef802bed3f9ea34bf64,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6def6b6b8e27732be2161935ee97726ca9a087ca6cf6eb45ce93002e5be07331,PodSandboxId:86dcc162
895b287a63f5fc2b69cf8981fba1a72b86c0d798aba11b15abaebb97,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760801233589372219,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fafc30208e669c610c46e609c1377925,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},
},&Container{Id:2867fda4fb320f98bf6f1718fcdfc07eb29558e3d79aafb6c7a061e3085ccf7c,PodSandboxId:d1740c9e8fae0754bf25412ea9ada8029ecd027740d410c17498607d2bd1dbef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760801233372970242,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a16f738dddf4cf0754808042f73e183,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb528bc32dbb1002be0cffac24ce673dfd1509c69e86dc426a4e7871f3e89a13,PodSandboxId:866056d445c9bd595fc4e8d47cdc49f83b5088dcd648fa78c73b41adcf3dd72d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760801233288115839,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-153767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1740a1cc4f8df7ba2aa7832e64e9753e,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259
,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b5b5aaf2a5329d2967c6a33790286ccc71f927089704cce0272c4edbe7f1026,PodSandboxId:211f39e68c4ad62c2fa1abd7dc0da9199f04b7dfad48f20f7286c9a61eb59150,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760801184756999213,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2ztp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e28f3cfe-ccea-418b-9644-100bb187e0ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=70f6ca2f-6e6b-4aec-a075-8b31e690bea9 name=/runtime.v1.RuntimeService/ListContainers
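The crio debug lines above are the CRI gRPC surface that the kubelet (and crictl) drive: RuntimeService/Version, ImageService/ImageFsInfo, and RuntimeService/ListContainers, where an empty filter is why crio logs "No filters were applied, returning full container list". Below is a minimal Go sketch of the same three calls, assuming cri-o's default socket path /var/run/crio/crio.sock and the k8s.io/cri-api and google.golang.org/grpc modules; it is an illustration, not minikube's own code.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the runtime socket; the unix:// target scheme is handled by grpc-go.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// RuntimeService/Version -- the VersionResponse lines in the log above.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// ImageService/ImageFsInfo -- image filesystem usage, as in the log.
	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, f := range fs.GetImageFilesystems() {
		fmt.Printf("%s: %d bytes used\n", f.GetFsId().GetMountpoint(), f.GetUsedBytes().GetValue())
	}

	// RuntimeService/ListContainers with an empty filter returns every container,
	// including the CONTAINER_EXITED attempt-1 instances kept after the restart.
	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range list.GetContainers() {
		fmt.Printf("%.13s %-25s attempt=%d %s\n",
			c.GetId(), c.GetMetadata().GetName(), c.GetMetadata().GetAttempt(), c.GetState())
	}
}

Run on the node itself (the socket is root-owned); the printed list corresponds to the container status table below, down to the truncated 13-character IDs.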
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	21a568cafd8fc       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   20 seconds ago       Running             coredns                   1                   fd49bed045f21       coredns-66bc5c9577-2ztp2
	bea487e2380ff       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   20 seconds ago       Running             kube-proxy                2                   e4a4c1ff54067       kube-proxy-nk7dv
	b5466f1480dc7       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   24 seconds ago       Running             etcd                      2                   dc2e696af7db1       etcd-pause-153767
	4d47c959b5afb       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   24 seconds ago       Running             kube-apiserver            2                   602f1c56f8fbb       kube-apiserver-pause-153767
	866b3859d3435       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   24 seconds ago       Running             kube-scheduler            2                   7f795ccd6e6a1       kube-scheduler-pause-153767
	c9cecb5df4eb4       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   24 seconds ago       Running             kube-controller-manager   2                   b0dcbe14650b8       kube-controller-manager-pause-153767
	7c34cce37ad90       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   31 seconds ago       Exited              kube-proxy                1                   4856f934ceee0       kube-proxy-nk7dv
	6def6b6b8e277       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   31 seconds ago       Exited              kube-apiserver            1                   86dcc162895b2       kube-apiserver-pause-153767
	ab6f334e8dc36       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   31 seconds ago       Exited              etcd                      1                   cef67b7a71988       etcd-pause-153767
	2867fda4fb320       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   31 seconds ago       Exited              kube-controller-manager   1                   d1740c9e8fae0       kube-controller-manager-pause-153767
	cb528bc32dbb1       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   31 seconds ago       Exited              kube-scheduler            1                   866056d445c9b       kube-scheduler-pause-153767
	5b5b5aaf2a532       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   211f39e68c4ad       coredns-66bc5c9577-2ztp2
	
	
	==> coredns [21a568cafd8fca8bdd14531095c7015e724c24bbd085114951c559efb285489f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40230 - 31640 "HINFO IN 3268297221603888393.2832899248348407764. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.086159951s
	
	
	==> coredns [5b5b5aaf2a5329d2967c6a33790286ccc71f927089704cce0272c4edbe7f1026] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	[INFO] Reloading complete
	[INFO] 127.0.0.1:59356 - 43908 "HINFO IN 6043093980954816288.5690242199728248812. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.110759548s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
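The failures in the first coredns instance above come from its kubernetes plugin's reflectors doing their initial List calls against the in-cluster service VIP (10.96.0.1:443) while the apiserver container was being restarted, hence the dial i/o timeouts. A hedged client-go sketch of the same call shape (limit=500, resourceVersion=0), assuming it runs in-cluster; this is illustrative, not coredns's actual implementation.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // resolves to the 10.96.0.1:443 service VIP
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Same shape as the reflector's initial list: limit=500, resourceVersion=0.
	nss, err := cs.CoreV1().Namespaces().List(ctx, metav1.ListOptions{
		Limit:           500,
		ResourceVersion: "0",
	})
	if err != nil {
		// While the apiserver is down this surfaces as the i/o timeout in the log.
		log.Fatal(err)
	}
	fmt.Printf("listed %d namespaces\n", len(nss.Items))
}

Once the restarted apiserver comes back, the reloaded coredns (the second instance above) completes these lists and begins serving normally.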
	
	
	==> describe nodes <==
	Name:               pause-153767
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-153767
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=pause-153767
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T15_26_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 15:26:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-153767
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 15:27:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 15:27:23 +0000   Sat, 18 Oct 2025 15:26:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 15:27:23 +0000   Sat, 18 Oct 2025 15:26:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 15:27:23 +0000   Sat, 18 Oct 2025 15:26:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 15:27:23 +0000   Sat, 18 Oct 2025 15:26:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.16
	  Hostname:    pause-153767
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 fc144a18922648ed9cfb63f6115e22f7
	  System UUID:                fc144a18-9226-48ed-9cfb-63f6115e22f7
	  Boot ID:                    3df6d59d-e4a1-4c3a-8504-85f1d554a509
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-2ztp2                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     82s
	  kube-system                 etcd-pause-153767                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         90s
	  kube-system                 kube-apiserver-pause-153767             250m (12%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-controller-manager-pause-153767    200m (10%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-proxy-nk7dv                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-scheduler-pause-153767             100m (5%)     0 (0%)      0 (0%)           0 (0%)         88s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 80s                kube-proxy       
	  Normal  Starting                 21s                kube-proxy       
	  Normal  Starting                 95s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  95s (x8 over 95s)  kubelet          Node pause-153767 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    95s (x8 over 95s)  kubelet          Node pause-153767 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     95s (x7 over 95s)  kubelet          Node pause-153767 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  95s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    87s                kubelet          Node pause-153767 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  87s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  87s                kubelet          Node pause-153767 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     87s                kubelet          Node pause-153767 status is now: NodeHasSufficientPID
	  Normal  NodeReady                87s                kubelet          Node pause-153767 status is now: NodeReady
	  Normal  Starting                 87s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           83s                node-controller  Node pause-153767 event: Registered Node pause-153767 in Controller
	  Normal  Starting                 26s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)  kubelet          Node pause-153767 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)  kubelet          Node pause-153767 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 26s)  kubelet          Node pause-153767 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19s                node-controller  Node pause-153767 event: Registered Node pause-153767 in Controller
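The describe output above is a rendering of the node object's .status.conditions, capacity/allocatable figures, and associated events. A short client-go sketch that reads the same conditions directly; the kubeconfig location and the node name pause-153767 are assumptions taken from this test profile.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Uses ~/.kube/config; swap in the minikube profile's kubeconfig as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	node, err := cs.CoreV1().Nodes().Get(context.Background(), "pause-153767", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Prints MemoryPressure/DiskPressure/PIDPressure/Ready as in the table above.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}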
	
	
	==> dmesg <==
	[Oct18 15:25] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000063] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002382] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.202574] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000026] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000006] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.090250] kauditd_printk_skb: 1 callbacks suppressed
	[Oct18 15:26] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.124465] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.173081] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.358488] kauditd_printk_skb: 19 callbacks suppressed
	[ +10.672523] kauditd_printk_skb: 218 callbacks suppressed
	[Oct18 15:27] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.173109] kauditd_printk_skb: 410 callbacks suppressed
	[  +4.709866] kauditd_printk_skb: 112 callbacks suppressed
	
	
	==> etcd [ab6f334e8dc36521935cdc9d4f5c25a21972c81a86da9a57acc28da27a052059] <==
	
	
	==> etcd [b5466f1480dc7718ecdada10117683d12668e8a7df3e23c2d59cd6aafbe1b45d] <==
	{"level":"warn","ts":"2025-10-18T15:27:22.657941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:27:22.676037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:27:22.699299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:27:22.721057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:27:22.735013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:27:22.753711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:27:22.772623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:27:22.788094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:27:22.862962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48052","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T15:27:40.199723Z","caller":"traceutil/trace.go:172","msg":"trace[1769768384] transaction","detail":"{read_only:false; response_revision:526; number_of_response:1; }","duration":"394.074399ms","start":"2025-10-18T15:27:39.805629Z","end":"2025-10-18T15:27:40.199704Z","steps":["trace[1769768384] 'process raft request'  (duration: 393.917731ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T15:27:40.200445Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T15:27:39.805605Z","time spent":"394.304606ms","remote":"127.0.0.1:53530","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6605,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-153767\" mod_revision:449 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-153767\" value_size:6534 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-153767\" > >"}
	{"level":"warn","ts":"2025-10-18T15:27:40.745118Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"481.021438ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-pause-153767\" limit:1 ","response":"range_response_count:1 size:6620"}
	{"level":"info","ts":"2025-10-18T15:27:40.745206Z","caller":"traceutil/trace.go:172","msg":"trace[1702561045] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-pause-153767; range_end:; response_count:1; response_revision:526; }","duration":"481.12461ms","start":"2025-10-18T15:27:40.264068Z","end":"2025-10-18T15:27:40.745193Z","steps":["trace[1702561045] 'range keys from in-memory index tree'  (duration: 480.960522ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T15:27:40.745238Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T15:27:40.264048Z","time spent":"481.18223ms","remote":"127.0.0.1:53530","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":6643,"request content":"key:\"/registry/pods/kube-system/kube-controller-manager-pause-153767\" limit:1 "}
	{"level":"warn","ts":"2025-10-18T15:27:40.745445Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"451.205318ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T15:27:40.745470Z","caller":"traceutil/trace.go:172","msg":"trace[803117120] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:526; }","duration":"451.23271ms","start":"2025-10-18T15:27:40.294230Z","end":"2025-10-18T15:27:40.745463Z","steps":["trace[803117120] 'range keys from in-memory index tree'  (duration: 451.179215ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T15:27:41.004041Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"324.685498ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5902136601809966205 > lease_revoke:<id:51e899f7eeb6d32d>","response":"size:28"}
	{"level":"info","ts":"2025-10-18T15:27:41.004118Z","caller":"traceutil/trace.go:172","msg":"trace[1460180529] linearizableReadLoop","detail":"{readStateIndex:569; appliedIndex:568; }","duration":"300.967113ms","start":"2025-10-18T15:27:40.703141Z","end":"2025-10-18T15:27:41.004108Z","steps":["trace[1460180529] 'read index received'  (duration: 27.229µs)","trace[1460180529] 'applied index is now lower than readState.Index'  (duration: 300.939202ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T15:27:41.004216Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"301.092477ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T15:27:41.004230Z","caller":"traceutil/trace.go:172","msg":"trace[601817914] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:526; }","duration":"301.113352ms","start":"2025-10-18T15:27:40.703112Z","end":"2025-10-18T15:27:41.004225Z","steps":["trace[601817914] 'agreement among raft nodes before linearized reading'  (duration: 301.072786ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T15:27:41.004249Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T15:27:40.703096Z","time spent":"301.148665ms","remote":"127.0.0.1:53168","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2025-10-18T15:27:41.004425Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"258.927262ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T15:27:41.004440Z","caller":"traceutil/trace.go:172","msg":"trace[237818036] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:526; }","duration":"258.943584ms","start":"2025-10-18T15:27:40.745492Z","end":"2025-10-18T15:27:41.004436Z","steps":["trace[237818036] 'agreement among raft nodes before linearized reading'  (duration: 258.919132ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T15:27:41.004797Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"255.361745ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-153767\" limit:1 ","response":"range_response_count:1 size:5279"}
	{"level":"info","ts":"2025-10-18T15:27:41.004918Z","caller":"traceutil/trace.go:172","msg":"trace[738410035] range","detail":"{range_begin:/registry/minions/pause-153767; range_end:; response_count:1; response_revision:526; }","duration":"255.485855ms","start":"2025-10-18T15:27:40.749420Z","end":"2025-10-18T15:27:41.004906Z","steps":["trace[738410035] 'agreement among raft nodes before linearized reading'  (duration: 255.243828ms)"],"step_count":1}
	
	
	==> kernel <==
	 15:27:45 up 2 min,  0 users,  load average: 1.15, 0.48, 0.18
	Linux pause-153767 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [4d47c959b5afb3bd0512ac7716dce6a3ce20efe46488b887d0e822c09ed3fd25] <==
	I1018 15:27:23.637801       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 15:27:23.637967       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 15:27:23.638072       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 15:27:23.638124       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 15:27:23.655508       1 aggregator.go:171] initial CRD sync complete...
	I1018 15:27:23.655688       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 15:27:23.655771       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 15:27:23.655875       1 cache.go:39] Caches are synced for autoregister controller
	I1018 15:27:23.663932       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 15:27:23.664312       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 15:27:23.694394       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 15:27:23.704418       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 15:27:23.704510       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 15:27:23.704528       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 15:27:23.704547       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1018 15:27:23.710063       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 15:27:23.772642       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 15:27:24.498375       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 15:27:25.672456       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 15:27:25.758664       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 15:27:25.820678       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 15:27:25.832449       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 15:27:27.065036       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 15:27:27.321585       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 15:27:27.365020       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [6def6b6b8e27732be2161935ee97726ca9a087ca6cf6eb45ce93002e5be07331] <==
	
	
	==> kube-controller-manager [2867fda4fb320f98bf6f1718fcdfc07eb29558e3d79aafb6c7a061e3085ccf7c] <==
	
	
	==> kube-controller-manager [c9cecb5df4eb46678d63bbbb832efea6a3954498e9c3d34ed051e7659d3754b2] <==
	I1018 15:27:26.972241       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 15:27:26.976970       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 15:27:26.980213       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 15:27:26.981551       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 15:27:26.989282       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 15:27:26.991463       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 15:27:26.998244       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 15:27:27.000644       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 15:27:27.004125       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 15:27:27.004277       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 15:27:27.004342       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 15:27:27.011302       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 15:27:27.011463       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 15:27:27.011397       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 15:27:27.011414       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 15:27:27.012339       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 15:27:27.012589       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 15:27:27.011384       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 15:27:27.012738       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 15:27:27.013911       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 15:27:27.016137       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 15:27:27.019424       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 15:27:27.019581       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 15:27:27.022944       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 15:27:27.027276       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	
	
	==> kube-proxy [7c34cce37ad9085a41dd300f3b2ca1aac9a93bb413c9b916ea9373feb06538cb] <==
	
	
	==> kube-proxy [bea487e2380ff22474d8b53ac0c150513df07dad107cccdd078e760b889d4500] <==
	I1018 15:27:24.270081       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 15:27:24.371113       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 15:27:24.371154       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.16"]
	E1018 15:27:24.371240       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 15:27:24.418174       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1018 15:27:24.418389       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1018 15:27:24.418460       1 server_linux.go:132] "Using iptables Proxier"
	I1018 15:27:24.431400       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 15:27:24.431869       1 server.go:527] "Version info" version="v1.34.1"
	I1018 15:27:24.431968       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:27:24.437682       1 config.go:200] "Starting service config controller"
	I1018 15:27:24.437723       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 15:27:24.437740       1 config.go:106] "Starting endpoint slice config controller"
	I1018 15:27:24.437744       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 15:27:24.437763       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 15:27:24.437767       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 15:27:24.438606       1 config.go:309] "Starting node config controller"
	I1018 15:27:24.438640       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 15:27:24.438647       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 15:27:24.539558       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 15:27:24.539626       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 15:27:24.539666       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [866b3859d3435292ba26b8279d82fec285b774e2995d5bc2d6fbe526ba01541e] <==
	I1018 15:27:21.766380       1 serving.go:386] Generated self-signed cert in-memory
	W1018 15:27:23.536973       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 15:27:23.537019       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 15:27:23.537036       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 15:27:23.537043       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 15:27:23.610754       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 15:27:23.611019       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:27:23.620945       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 15:27:23.631255       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 15:27:23.634767       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 15:27:23.632976       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 15:27:23.735734       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [cb528bc32dbb1002be0cffac24ce673dfd1509c69e86dc426a4e7871f3e89a13] <==
	
	
	==> kubelet <==
	Oct 18 15:27:21 pause-153767 kubelet[3971]: E1018 15:27:21.018095    3971 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-153767\" not found" node="pause-153767"
	Oct 18 15:27:21 pause-153767 kubelet[3971]: E1018 15:27:21.027650    3971 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-153767\" not found" node="pause-153767"
	Oct 18 15:27:21 pause-153767 kubelet[3971]: I1018 15:27:21.352696    3971 kubelet_node_status.go:75] "Attempting to register node" node="pause-153767"
	Oct 18 15:27:22 pause-153767 kubelet[3971]: E1018 15:27:22.030149    3971 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-153767\" not found" node="pause-153767"
	Oct 18 15:27:22 pause-153767 kubelet[3971]: E1018 15:27:22.030549    3971 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-153767\" not found" node="pause-153767"
	Oct 18 15:27:22 pause-153767 kubelet[3971]: E1018 15:27:22.031987    3971 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-153767\" not found" node="pause-153767"
	Oct 18 15:27:22 pause-153767 kubelet[3971]: E1018 15:27:22.034095    3971 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-153767\" not found" node="pause-153767"
	Oct 18 15:27:23 pause-153767 kubelet[3971]: E1018 15:27:23.036717    3971 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-153767\" not found" node="pause-153767"
	Oct 18 15:27:23 pause-153767 kubelet[3971]: E1018 15:27:23.038103    3971 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-153767\" not found" node="pause-153767"
	Oct 18 15:27:23 pause-153767 kubelet[3971]: E1018 15:27:23.038899    3971 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-153767\" not found" node="pause-153767"
	Oct 18 15:27:23 pause-153767 kubelet[3971]: I1018 15:27:23.685215    3971 apiserver.go:52] "Watching apiserver"
	Oct 18 15:27:23 pause-153767 kubelet[3971]: I1018 15:27:23.691464    3971 kubelet_node_status.go:124] "Node was previously registered" node="pause-153767"
	Oct 18 15:27:23 pause-153767 kubelet[3971]: I1018 15:27:23.691550    3971 kubelet_node_status.go:78] "Successfully registered node" node="pause-153767"
	Oct 18 15:27:23 pause-153767 kubelet[3971]: I1018 15:27:23.691582    3971 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 15:27:23 pause-153767 kubelet[3971]: I1018 15:27:23.694915    3971 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 15:27:23 pause-153767 kubelet[3971]: I1018 15:27:23.718587    3971 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 18 15:27:23 pause-153767 kubelet[3971]: I1018 15:27:23.757309    3971 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/95bf3faf-25ed-4469-9495-c37a4b55623b-xtables-lock\") pod \"kube-proxy-nk7dv\" (UID: \"95bf3faf-25ed-4469-9495-c37a4b55623b\") " pod="kube-system/kube-proxy-nk7dv"
	Oct 18 15:27:23 pause-153767 kubelet[3971]: I1018 15:27:23.757471    3971 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/95bf3faf-25ed-4469-9495-c37a4b55623b-lib-modules\") pod \"kube-proxy-nk7dv\" (UID: \"95bf3faf-25ed-4469-9495-c37a4b55623b\") " pod="kube-system/kube-proxy-nk7dv"
	Oct 18 15:27:23 pause-153767 kubelet[3971]: I1018 15:27:23.996132    3971 scope.go:117] "RemoveContainer" containerID="7c34cce37ad9085a41dd300f3b2ca1aac9a93bb413c9b916ea9373feb06538cb"
	Oct 18 15:27:26 pause-153767 kubelet[3971]: I1018 15:27:26.078247    3971 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 15:27:28 pause-153767 kubelet[3971]: I1018 15:27:28.677707    3971 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 15:27:29 pause-153767 kubelet[3971]: E1018 15:27:29.846270    3971 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760801249845493092  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 18 15:27:29 pause-153767 kubelet[3971]: E1018 15:27:29.846294    3971 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760801249845493092  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 18 15:27:39 pause-153767 kubelet[3971]: E1018 15:27:39.850469    3971 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760801259849360985  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 18 15:27:39 pause-153767 kubelet[3971]: E1018 15:27:39.850619    3971 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760801259849360985  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-153767 -n pause-153767
helpers_test.go:269: (dbg) Run:  kubectl --context pause-153767 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (42.43s)


Test pass (270/324)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 6.97
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 3.28
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.16
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.67
22 TestOffline 80.46
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 141.79
31 TestAddons/serial/GCPAuth/Namespaces 0.17
32 TestAddons/serial/GCPAuth/FakeCredentials 8.59
36 TestAddons/parallel/RegistryCreds 0.81
38 TestAddons/parallel/InspektorGadget 6.33
39 TestAddons/parallel/MetricsServer 5.86
42 TestAddons/parallel/Headlamp 99.03
43 TestAddons/parallel/CloudSpanner 6.8
45 TestAddons/parallel/NvidiaDevicePlugin 6.6
46 TestAddons/parallel/Yakd 11.88
48 TestAddons/StoppedEnableDisable 81.42
49 TestCertOptions 70.29
50 TestCertExpiration 281.22
52 TestForceSystemdFlag 68.82
53 TestForceSystemdEnv 44.38
55 TestKVMDriverInstallOrUpdate 1.15
59 TestErrorSpam/setup 40.08
60 TestErrorSpam/start 0.37
61 TestErrorSpam/status 0.84
62 TestErrorSpam/pause 1.83
63 TestErrorSpam/unpause 1.97
64 TestErrorSpam/stop 4.84
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 54.95
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 37.83
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.11
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.64
76 TestFunctional/serial/CacheCmd/cache/add_local 1.56
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.76
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 34.09
85 TestFunctional/serial/ComponentHealth 0.08
86 TestFunctional/serial/LogsCmd 1.62
87 TestFunctional/serial/LogsFileCmd 1.64
88 TestFunctional/serial/InvalidService 4.27
90 TestFunctional/parallel/ConfigCmd 0.39
92 TestFunctional/parallel/DryRun 0.28
93 TestFunctional/parallel/InternationalLanguage 0.14
94 TestFunctional/parallel/StatusCmd 0.81
99 TestFunctional/parallel/AddonsCmd 0.15
102 TestFunctional/parallel/SSHCmd 0.42
103 TestFunctional/parallel/CpCmd 1.44
105 TestFunctional/parallel/FileSync 0.27
106 TestFunctional/parallel/CertSync 1.44
110 TestFunctional/parallel/NodeLabels 0.08
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.45
114 TestFunctional/parallel/License 0.39
115 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
116 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
117 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
128 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
129 TestFunctional/parallel/ProfileCmd/profile_list 0.34
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
131 TestFunctional/parallel/MountCmd/any-port 99.52
132 TestFunctional/parallel/MountCmd/specific-port 1.85
133 TestFunctional/parallel/MountCmd/VerifyCleanup 1.11
134 TestFunctional/parallel/ServiceCmd/List 1.3
135 TestFunctional/parallel/Version/short 0.07
136 TestFunctional/parallel/Version/components 0.76
137 TestFunctional/parallel/ServiceCmd/JSONOutput 1.27
138 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
139 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
140 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
141 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
142 TestFunctional/parallel/ImageCommands/ImageBuild 2.5
143 TestFunctional/parallel/ImageCommands/Setup 1.04
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.76
148 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.99
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.34
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.54
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.9
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.59
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 216.78
162 TestMultiControlPlane/serial/DeployApp 5.58
163 TestMultiControlPlane/serial/PingHostFromPods 1.34
164 TestMultiControlPlane/serial/AddWorkerNode 46.97
165 TestMultiControlPlane/serial/NodeLabels 0.08
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.94
167 TestMultiControlPlane/serial/CopyFile 13.91
168 TestMultiControlPlane/serial/StopSecondaryNode 86.41
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.73
170 TestMultiControlPlane/serial/RestartSecondaryNode 37.36
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.16
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 378.7
173 TestMultiControlPlane/serial/DeleteSecondaryNode 19.02
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.67
175 TestMultiControlPlane/serial/StopCluster 245.92
176 TestMultiControlPlane/serial/RestartCluster 105.91
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.69
178 TestMultiControlPlane/serial/AddSecondaryNode 110.52
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.92
183 TestJSONOutput/start/Command 82.98
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.78
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.71
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.05
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.22
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 88.88
215 TestMountStart/serial/StartWithMountFirst 23.37
216 TestMountStart/serial/VerifyMountFirst 0.39
217 TestMountStart/serial/StartWithMountSecond 24.24
218 TestMountStart/serial/VerifyMountSecond 0.39
219 TestMountStart/serial/DeleteFirst 0.74
220 TestMountStart/serial/VerifyMountPostDelete 0.4
221 TestMountStart/serial/Stop 1.33
222 TestMountStart/serial/RestartStopped 20.57
223 TestMountStart/serial/VerifyMountPostStop 0.4
226 TestMultiNode/serial/FreshStart2Nodes 132.14
227 TestMultiNode/serial/DeployApp2Nodes 4.34
228 TestMultiNode/serial/PingHostFrom2Pods 0.85
229 TestMultiNode/serial/AddNode 42.71
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.63
232 TestMultiNode/serial/CopyFile 7.72
233 TestMultiNode/serial/StopNode 2.61
234 TestMultiNode/serial/StartAfterStop 40.47
235 TestMultiNode/serial/RestartKeepsNodes 312.56
236 TestMultiNode/serial/DeleteNode 2.85
237 TestMultiNode/serial/StopMultiNode 171.81
238 TestMultiNode/serial/RestartMultiNode 118.46
239 TestMultiNode/serial/ValidateNameConflict 44.92
246 TestScheduledStopUnix 110.71
250 TestRunningBinaryUpgrade 163.52
252 TestKubernetesUpgrade 149.26
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
256 TestNoKubernetes/serial/StartWithK8s 85.41
264 TestNetworkPlugins/group/false 4.72
268 TestNoKubernetes/serial/StartWithStopK8s 51.84
269 TestNoKubernetes/serial/Start 44.87
271 TestPause/serial/Start 127.16
272 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
273 TestNoKubernetes/serial/ProfileList 1.17
274 TestNoKubernetes/serial/Stop 1.38
275 TestNoKubernetes/serial/StartNoArgs 58.14
276 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
284 TestStoppedBinaryUpgrade/Setup 0.46
285 TestStoppedBinaryUpgrade/Upgrade 112.81
287 TestNetworkPlugins/group/auto/Start 86.68
288 TestNetworkPlugins/group/kindnet/Start 70.32
289 TestStoppedBinaryUpgrade/MinikubeLogs 1.5
290 TestNetworkPlugins/group/calico/Start 75.36
291 TestNetworkPlugins/group/auto/KubeletFlags 0.27
292 TestNetworkPlugins/group/auto/NetCatPod 12.33
293 TestNetworkPlugins/group/auto/DNS 0.17
294 TestNetworkPlugins/group/auto/Localhost 0.17
295 TestNetworkPlugins/group/auto/HairPin 0.17
296 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
297 TestNetworkPlugins/group/custom-flannel/Start 84.54
298 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
299 TestNetworkPlugins/group/kindnet/NetCatPod 12.29
300 TestNetworkPlugins/group/enable-default-cni/Start 79.57
301 TestNetworkPlugins/group/kindnet/DNS 0.21
302 TestNetworkPlugins/group/kindnet/Localhost 0.17
303 TestNetworkPlugins/group/kindnet/HairPin 0.17
304 TestNetworkPlugins/group/calico/ControllerPod 6.01
305 TestNetworkPlugins/group/calico/KubeletFlags 0.58
306 TestNetworkPlugins/group/calico/NetCatPod 12.31
307 TestNetworkPlugins/group/flannel/Start 90.75
308 TestNetworkPlugins/group/calico/DNS 0.19
309 TestNetworkPlugins/group/calico/Localhost 0.19
310 TestNetworkPlugins/group/calico/HairPin 0.16
311 TestNetworkPlugins/group/bridge/Start 93.83
312 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
313 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.31
314 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
315 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.32
316 TestNetworkPlugins/group/custom-flannel/DNS 0.21
317 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
318 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
319 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
320 TestNetworkPlugins/group/enable-default-cni/Localhost 0.28
321 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
323 TestStartStop/group/old-k8s-version/serial/FirstStart 97.08
325 TestStartStop/group/embed-certs/serial/FirstStart 112.04
326 TestNetworkPlugins/group/flannel/ControllerPod 6.01
327 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
328 TestNetworkPlugins/group/flannel/NetCatPod 10.29
329 TestNetworkPlugins/group/flannel/DNS 0.21
330 TestNetworkPlugins/group/flannel/Localhost 0.19
331 TestNetworkPlugins/group/flannel/HairPin 0.21
333 TestStartStop/group/no-preload/serial/FirstStart 109.16
334 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
335 TestNetworkPlugins/group/bridge/NetCatPod 12.32
336 TestNetworkPlugins/group/bridge/DNS 0.23
337 TestNetworkPlugins/group/bridge/Localhost 0.15
338 TestNetworkPlugins/group/bridge/HairPin 0.19
340 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 88.17
341 TestStartStop/group/old-k8s-version/serial/DeployApp 8.69
342 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.38
343 TestStartStop/group/old-k8s-version/serial/Stop 90.48
344 TestStartStop/group/embed-certs/serial/DeployApp 9.35
345 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.23
346 TestStartStop/group/embed-certs/serial/Stop 71.22
347 TestStartStop/group/no-preload/serial/DeployApp 8.3
348 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.07
349 TestStartStop/group/no-preload/serial/Stop 88.6
350 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.28
351 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.01
352 TestStartStop/group/default-k8s-diff-port/serial/Stop 88.17
353 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
354 TestStartStop/group/embed-certs/serial/SecondStart 48
355 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
356 TestStartStop/group/old-k8s-version/serial/SecondStart 60.69
357 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 8.01
358 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
359 TestStartStop/group/no-preload/serial/SecondStart 61.71
360 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
361 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
362 TestStartStop/group/embed-certs/serial/Pause 3.18
363 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 10.01
364 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.27
365 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 56.06
367 TestStartStop/group/newest-cni/serial/FirstStart 75.7
368 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.14
369 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
370 TestStartStop/group/old-k8s-version/serial/Pause 3.31
371 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 7.01
372 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 9.01
373 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
374 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
375 TestStartStop/group/no-preload/serial/Pause 3.1
376 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
377 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
378 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.03
379 TestStartStop/group/newest-cni/serial/DeployApp 0
380 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.09
381 TestStartStop/group/newest-cni/serial/Stop 10.98
382 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
383 TestStartStop/group/newest-cni/serial/SecondStart 35.71
384 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
385 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
386 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
387 TestStartStop/group/newest-cni/serial/Pause 3.21
TestDownloadOnly/v1.28.0/json-events (6.97s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-031579 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-031579 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (6.973034471s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.97s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1018 14:08:33.279510 1759792 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1018 14:08:33.279679 1759792 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-031579
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-031579: exit status 85 (72.576149ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-031579 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-031579 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 14:08:26
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 14:08:26.353014 1759804 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:08:26.353360 1759804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:08:26.353371 1759804 out.go:374] Setting ErrFile to fd 2...
	I1018 14:08:26.353379 1759804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:08:26.353608 1759804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1755824/.minikube/bin
	W1018 14:08:26.353781 1759804 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21409-1755824/.minikube/config/config.json: open /home/jenkins/minikube-integration/21409-1755824/.minikube/config/config.json: no such file or directory
	I1018 14:08:26.354294 1759804 out.go:368] Setting JSON to true
	I1018 14:08:26.355375 1759804 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":21054,"bootTime":1760775452,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 14:08:26.355475 1759804 start.go:141] virtualization: kvm guest
	I1018 14:08:26.357944 1759804 out.go:99] [download-only-031579] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1018 14:08:26.358130 1759804 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/preloaded-tarball: no such file or directory
	I1018 14:08:26.358166 1759804 notify.go:220] Checking for updates...
	I1018 14:08:26.359833 1759804 out.go:171] MINIKUBE_LOCATION=21409
	I1018 14:08:26.361435 1759804 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 14:08:26.362849 1759804 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21409-1755824/kubeconfig
	I1018 14:08:26.364362 1759804 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1755824/.minikube
	I1018 14:08:26.365923 1759804 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1018 14:08:26.368677 1759804 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1018 14:08:26.369010 1759804 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 14:08:26.400769 1759804 out.go:99] Using the kvm2 driver based on user configuration
	I1018 14:08:26.400838 1759804 start.go:305] selected driver: kvm2
	I1018 14:08:26.400849 1759804 start.go:925] validating driver "kvm2" against <nil>
	I1018 14:08:26.401202 1759804 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 14:08:26.401327 1759804 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-1755824/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 14:08:26.416075 1759804 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 14:08:26.416117 1759804 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-1755824/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 14:08:26.430819 1759804 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 14:08:26.430878 1759804 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 14:08:26.431512 1759804 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1018 14:08:26.431701 1759804 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 14:08:26.431734 1759804 cni.go:84] Creating CNI manager for ""
	I1018 14:08:26.431807 1759804 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 14:08:26.431821 1759804 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1018 14:08:26.431895 1759804 start.go:349] cluster config:
	{Name:download-only-031579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-031579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:08:26.432114 1759804 iso.go:125] acquiring lock: {Name:mk7faf1d3c636cdbb2becc20102b665984151b51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 14:08:26.434250 1759804 out.go:99] Downloading VM boot image ...
	I1018 14:08:26.434305 1759804 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1018 14:08:30.183958 1759804 out.go:99] Starting "download-only-031579" primary control-plane node in "download-only-031579" cluster
	I1018 14:08:30.184000 1759804 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 14:08:30.208250 1759804 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1018 14:08:30.208308 1759804 cache.go:58] Caching tarball of preloaded images
	I1018 14:08:30.208507 1759804 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 14:08:30.210411 1759804 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1018 14:08:30.210443 1759804 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1018 14:08:30.236137 1759804 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1018 14:08:30.236273 1759804 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-031579 host does not exist
	  To start a cluster, run: "minikube start -p download-only-031579"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-031579
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.34.1/json-events (3.28s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-398489 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-398489 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (3.283431108s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.28s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1018 14:08:36.949284 1759792 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1018 14:08:36.949338 1759792 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-1755824/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-398489
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-398489: exit status 85 (70.525306ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-031579 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-031579 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                               │ minikube             │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ 18 Oct 25 14:08 UTC │
	│ delete  │ -p download-only-031579                                                                                                                                                                             │ download-only-031579 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │ 18 Oct 25 14:08 UTC │
	│ start   │ -o=json --download-only -p download-only-398489 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-398489 │ jenkins │ v1.37.0 │ 18 Oct 25 14:08 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 14:08:33
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 14:08:33.710311 1760011 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:08:33.710591 1760011 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:08:33.710600 1760011 out.go:374] Setting ErrFile to fd 2...
	I1018 14:08:33.710604 1760011 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:08:33.710824 1760011 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1755824/.minikube/bin
	I1018 14:08:33.711326 1760011 out.go:368] Setting JSON to true
	I1018 14:08:33.712281 1760011 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":21062,"bootTime":1760775452,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 14:08:33.712440 1760011 start.go:141] virtualization: kvm guest
	I1018 14:08:33.714407 1760011 out.go:99] [download-only-398489] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 14:08:33.714575 1760011 notify.go:220] Checking for updates...
	I1018 14:08:33.716047 1760011 out.go:171] MINIKUBE_LOCATION=21409
	I1018 14:08:33.717420 1760011 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 14:08:33.718692 1760011 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21409-1755824/kubeconfig
	I1018 14:08:33.720046 1760011 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1755824/.minikube
	I1018 14:08:33.721493 1760011 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-398489 host does not exist
	  To start a cluster, run: "minikube start -p download-only-398489"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-398489
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.67s)

=== RUN   TestBinaryMirror
I1018 14:08:37.611944 1759792 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-305392 --alsologtostderr --binary-mirror http://127.0.0.1:39643 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
helpers_test.go:175: Cleaning up "binary-mirror-305392" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-305392
--- PASS: TestBinaryMirror (0.67s)

TestOffline (80.46s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-459651 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-459651 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m19.567693399s)
helpers_test.go:175: Cleaning up "offline-crio-459651" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-459651
--- PASS: TestOffline (80.46s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-891059
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-891059: exit status 85 (56.382452ms)

-- stdout --
	* Profile "addons-891059" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-891059"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-891059
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-891059: exit status 85 (56.853466ms)

-- stdout --
	* Profile "addons-891059" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-891059"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (141.79s)
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-891059 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-891059 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m21.790935485s)
--- PASS: TestAddons/Setup (141.79s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-891059 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-891059 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/serial/GCPAuth/FakeCredentials (8.59s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-891059 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-891059 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [75ccff45-9202-4152-b90e-8a5a6d306c7d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [75ccff45-9202-4152-b90e-8a5a6d306c7d] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.004901264s
addons_test.go:694: (dbg) Run:  kubectl --context addons-891059 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-891059 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-891059 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.59s)

TestAddons/parallel/RegistryCreds (0.81s)
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.564176ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-891059
addons_test.go:332: (dbg) Run:  kubectl --context addons-891059 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-891059 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.81s)

TestAddons/parallel/InspektorGadget (6.33s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-bz8k2" [32f0a88f-aea2-4621-a5b1-df5a3fb86a2b] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.006731634s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-891059 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.33s)

TestAddons/parallel/MetricsServer (5.86s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 7.043906ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-zthlp" [23d1a687-8b62-4e3f-be5e-9664ae7f101e] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004496409s
addons_test.go:463: (dbg) Run:  kubectl --context addons-891059 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-891059 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.86s)

TestAddons/parallel/Headlamp (99.03s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-891059 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-891059 --alsologtostderr -v=1: (1.090221331s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-67bwz" [fdb7e1d4-852f-4236-9cdf-29089e1285d4] Pending
helpers_test.go:352: "headlamp-6945c6f4d-67bwz" [fdb7e1d4-852f-4236-9cdf-29089e1285d4] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-67bwz" [fdb7e1d4-852f-4236-9cdf-29089e1285d4] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 1m32.007755145s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-891059 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-891059 addons disable headlamp --alsologtostderr -v=1: (5.933214984s)
--- PASS: TestAddons/parallel/Headlamp (99.03s)

TestAddons/parallel/CloudSpanner (6.8s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-mk9xb" [55e01946-62a0-4423-9743-aade2ef744a9] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.005780445s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-891059 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.80s)

TestAddons/parallel/NvidiaDevicePlugin (6.6s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-5z8tb" [0e21578d-6373-41a1-aaa9-7c86d80f9c8c] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005688347s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-891059 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.60s)

TestAddons/parallel/Yakd (11.88s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-xt7jp" [9ff96a54-feef-40f7-883d-557d20da0d77] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00598616s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-891059 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-891059 addons disable yakd --alsologtostderr -v=1: (5.875961663s)
--- PASS: TestAddons/parallel/Yakd (11.88s)

TestAddons/StoppedEnableDisable (81.42s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-891059
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-891059: (1m21.114002881s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-891059
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-891059
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-891059
--- PASS: TestAddons/StoppedEnableDisable (81.42s)

TestCertOptions (70.29s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-155388 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-155388 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m8.815830642s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-155388 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-155388 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-155388 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-155388" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-155388
--- PASS: TestCertOptions (70.29s)

TestCertExpiration (281.22s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-486593 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 15:24:24.564088 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-486593 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (54.805979018s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-486593 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-486593 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (45.420070104s)
helpers_test.go:175: Cleaning up "cert-expiration-486593" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-486593
--- PASS: TestCertExpiration (281.22s)

TestForceSystemdFlag (68.82s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-261740 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-261740 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m7.699721512s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-261740 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-261740" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-261740
--- PASS: TestForceSystemdFlag (68.82s)

TestForceSystemdEnv (44.38s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-508259 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-508259 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (43.263140181s)
helpers_test.go:175: Cleaning up "force-systemd-env-508259" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-508259
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-508259: (1.113949402s)
--- PASS: TestForceSystemdEnv (44.38s)

TestKVMDriverInstallOrUpdate (1.15s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I1018 15:23:10.712754 1759792 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1018 15:23:10.712906 1759792 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3652478309/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1018 15:23:10.747201 1759792 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3652478309/001/docker-machine-driver-kvm2 version is 1.1.1
W1018 15:23:10.747242 1759792 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1018 15:23:10.747388 1759792 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1018 15:23:10.747440 1759792 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3652478309/001/docker-machine-driver-kvm2
I1018 15:23:11.713541 1759792 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3652478309/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1018 15:23:11.730691 1759792 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3652478309/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (1.15s)

TestErrorSpam/setup (40.08s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-750230 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-750230 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-750230 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-750230 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.075412902s)
--- PASS: TestErrorSpam/setup (40.08s)

TestErrorSpam/start (0.37s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-750230 --log_dir /tmp/nospam-750230 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-750230 --log_dir /tmp/nospam-750230 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-750230 --log_dir /tmp/nospam-750230 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

TestErrorSpam/status (0.84s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-750230 --log_dir /tmp/nospam-750230 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-750230 --log_dir /tmp/nospam-750230 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-750230 --log_dir /tmp/nospam-750230 status
--- PASS: TestErrorSpam/status (0.84s)

TestErrorSpam/pause (1.83s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-750230 --log_dir /tmp/nospam-750230 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-750230 --log_dir /tmp/nospam-750230 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-750230 --log_dir /tmp/nospam-750230 pause
--- PASS: TestErrorSpam/pause (1.83s)

TestErrorSpam/unpause (1.97s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-750230 --log_dir /tmp/nospam-750230 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-750230 --log_dir /tmp/nospam-750230 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-750230 --log_dir /tmp/nospam-750230 unpause
--- PASS: TestErrorSpam/unpause (1.97s)

TestErrorSpam/stop (4.84s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-750230 --log_dir /tmp/nospam-750230 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-750230 --log_dir /tmp/nospam-750230 stop: (2.453244353s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-750230 --log_dir /tmp/nospam-750230 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-750230 --log_dir /tmp/nospam-750230 stop: (1.201728569s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-750230 --log_dir /tmp/nospam-750230 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-750230 --log_dir /tmp/nospam-750230 stop: (1.187769569s)
--- PASS: TestErrorSpam/stop (4.84s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21409-1755824/.minikube/files/etc/test/nested/copy/1759792/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (54.95s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-900196 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-900196 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (54.945247288s)
--- PASS: TestFunctional/serial/StartWithProxy (54.95s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.83s)
=== RUN   TestFunctional/serial/SoftStart
I1018 14:22:57.000138 1759792 config.go:182] Loaded profile config "functional-900196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-900196 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-900196 --alsologtostderr -v=8: (37.831042197s)
functional_test.go:678: soft start took 37.831685138s for "functional-900196" cluster.
I1018 14:23:34.831567 1759792 config.go:182] Loaded profile config "functional-900196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (37.83s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.11s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-900196 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.64s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-900196 cache add registry.k8s.io/pause:3.1: (1.152580646s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-900196 cache add registry.k8s.io/pause:3.3: (1.174373529s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-900196 cache add registry.k8s.io/pause:latest: (1.311669149s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.64s)

TestFunctional/serial/CacheCmd/cache/add_local (1.56s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-900196 /tmp/TestFunctionalserialCacheCmdcacheadd_local3701804777/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 cache add minikube-local-cache-test:functional-900196
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-900196 cache add minikube-local-cache-test:functional-900196: (1.210817785s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 cache delete minikube-local-cache-test:functional-900196
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-900196
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.56s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.76s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-900196 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (220.463319ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-900196 cache reload: (1.048212037s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.76s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 kubectl -- --context functional-900196 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-900196 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (34.09s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-900196 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-900196 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.092241187s)
functional_test.go:776: restart took 34.092388676s for "functional-900196" cluster.
I1018 14:24:16.708529 1759792 config.go:182] Loaded profile config "functional-900196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (34.09s)

TestFunctional/serial/ComponentHealth (0.08s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-900196 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.08s)

TestFunctional/serial/LogsCmd (1.62s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-900196 logs: (1.61518549s)
--- PASS: TestFunctional/serial/LogsCmd (1.62s)

TestFunctional/serial/LogsFileCmd (1.64s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 logs --file /tmp/TestFunctionalserialLogsFileCmd930327444/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-900196 logs --file /tmp/TestFunctionalserialLogsFileCmd930327444/001/logs.txt: (1.639317491s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.64s)

TestFunctional/serial/InvalidService (4.27s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-900196 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-900196
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-900196: exit status 115 (296.783852ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.34:31387 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-900196 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.27s)

TestFunctional/parallel/ConfigCmd (0.39s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-900196 config get cpus: exit status 14 (66.322974ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-900196 config get cpus: exit status 14 (65.050898ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)

TestFunctional/parallel/DryRun (0.28s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-900196 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-900196 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (138.252381ms)

-- stdout --
	* [functional-900196] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-1755824/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1755824/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile

-- /stdout --
** stderr ** 
	I1018 14:32:20.121074 1770850 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:32:20.121318 1770850 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:32:20.121327 1770850 out.go:374] Setting ErrFile to fd 2...
	I1018 14:32:20.121331 1770850 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:32:20.121561 1770850 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1755824/.minikube/bin
	I1018 14:32:20.122020 1770850 out.go:368] Setting JSON to false
	I1018 14:32:20.123012 1770850 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":22488,"bootTime":1760775452,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 14:32:20.123115 1770850 start.go:141] virtualization: kvm guest
	I1018 14:32:20.124911 1770850 out.go:179] * [functional-900196] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 14:32:20.126629 1770850 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 14:32:20.126630 1770850 notify.go:220] Checking for updates...
	I1018 14:32:20.129310 1770850 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 14:32:20.130661 1770850 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1755824/kubeconfig
	I1018 14:32:20.132021 1770850 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1755824/.minikube
	I1018 14:32:20.133069 1770850 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 14:32:20.134414 1770850 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 14:32:20.136102 1770850 config.go:182] Loaded profile config "functional-900196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:32:20.136498 1770850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:32:20.136567 1770850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:32:20.151074 1770850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37733
	I1018 14:32:20.151540 1770850 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:32:20.152079 1770850 main.go:141] libmachine: Using API Version  1
	I1018 14:32:20.152108 1770850 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:32:20.152515 1770850 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:32:20.152703 1770850 main.go:141] libmachine: (functional-900196) Calling .DriverName
	I1018 14:32:20.153002 1770850 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 14:32:20.153310 1770850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:32:20.153370 1770850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:32:20.167336 1770850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44205
	I1018 14:32:20.167908 1770850 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:32:20.168394 1770850 main.go:141] libmachine: Using API Version  1
	I1018 14:32:20.168423 1770850 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:32:20.168722 1770850 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:32:20.168932 1770850 main.go:141] libmachine: (functional-900196) Calling .DriverName
	I1018 14:32:20.202520 1770850 out.go:179] * Using the kvm2 driver based on existing profile
	I1018 14:32:20.203792 1770850 start.go:305] selected driver: kvm2
	I1018 14:32:20.203815 1770850 start.go:925] validating driver "kvm2" against &{Name:functional-900196 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-900196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.34 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:32:20.203961 1770850 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 14:32:20.206301 1770850 out.go:203] 
	W1018 14:32:20.207491 1770850 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1018 14:32:20.208627 1770850 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-900196 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
--- PASS: TestFunctional/parallel/DryRun (0.28s)

TestFunctional/parallel/InternationalLanguage (0.14s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-900196 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-900196 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (140.568729ms)

                                                
                                                
-- stdout --
	* [functional-900196] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-1755824/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1755824/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 14:32:19.984129 1770822 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:32:19.984400 1770822 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:32:19.984409 1770822 out.go:374] Setting ErrFile to fd 2...
	I1018 14:32:19.984413 1770822 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:32:19.984743 1770822 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1755824/.minikube/bin
	I1018 14:32:19.985216 1770822 out.go:368] Setting JSON to false
	I1018 14:32:19.986290 1770822 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":22488,"bootTime":1760775452,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 14:32:19.986413 1770822 start.go:141] virtualization: kvm guest
	I1018 14:32:19.988475 1770822 out.go:179] * [functional-900196] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1018 14:32:19.989890 1770822 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 14:32:19.989924 1770822 notify.go:220] Checking for updates...
	I1018 14:32:19.992424 1770822 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 14:32:19.993954 1770822 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1755824/kubeconfig
	I1018 14:32:19.995482 1770822 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1755824/.minikube
	I1018 14:32:19.997018 1770822 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 14:32:19.998363 1770822 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 14:32:20.000078 1770822 config.go:182] Loaded profile config "functional-900196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:32:20.000567 1770822 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:32:20.000657 1770822 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:32:20.014851 1770822 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43677
	I1018 14:32:20.015328 1770822 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:32:20.016094 1770822 main.go:141] libmachine: Using API Version  1
	I1018 14:32:20.016126 1770822 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:32:20.016545 1770822 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:32:20.016751 1770822 main.go:141] libmachine: (functional-900196) Calling .DriverName
	I1018 14:32:20.017027 1770822 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 14:32:20.017371 1770822 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:32:20.017422 1770822 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:32:20.032717 1770822 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44583
	I1018 14:32:20.033142 1770822 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:32:20.033670 1770822 main.go:141] libmachine: Using API Version  1
	I1018 14:32:20.033710 1770822 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:32:20.034038 1770822 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:32:20.034237 1770822 main.go:141] libmachine: (functional-900196) Calling .DriverName
	I1018 14:32:20.064594 1770822 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1018 14:32:20.065836 1770822 start.go:305] selected driver: kvm2
	I1018 14:32:20.065853 1770822 start.go:925] validating driver "kvm2" against &{Name:functional-900196 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-900196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.34 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:32:20.065949 1770822 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 14:32:20.067993 1770822 out.go:203] 
	W1018 14:32:20.069260 1770822 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1018 14:32:20.070351 1770822 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)
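Note: the French stderr above is the localized form of the same failure exercised in DryRun; in English it reads "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB". A minimal sketch of how such a localized-output check can be reproduced, assuming minikube selects its language from the LC_ALL/LANG environment; the binary path and substring below are illustrative, not the test's actual assertions:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Dry-run start with a French locale and a deliberately undersized memory request.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-900196",
		"--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=crio")
	cmd.Env = append(os.Environ(), "LC_ALL=fr") // assumption: locale is read from the environment
	out, err := cmd.CombinedOutput()
	if err == nil {
		fmt.Println("expected a non-zero exit for the undersized memory request")
		return
	}
	// The error code stays stable across languages; only the surrounding text is translated.
	fmt.Println(strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY"))
}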

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.81s)
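The -f flag in the status check above is a Go text/template rendered over minikube's status struct ("kublet" is the literal key the test prints verbatim; the {{.Kubelet}} field reference is what the template engine resolves). A minimal standalone sketch, with an illustrative struct rather than minikube's own type:

package main

import (
	"os"
	"text/template"
)

// Status mirrors the fields referenced by the -f template in the test above;
// the struct name and field values here are illustrative.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	if err := tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}); err != nil {
		panic(err)
	}
}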

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh -n functional-900196 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 cp functional-900196:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2349911493/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh -n functional-900196 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh -n functional-900196 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.44s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1759792/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh "sudo cat /etc/test/nested/copy/1759792/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1759792.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh "sudo cat /etc/ssl/certs/1759792.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1759792.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh "sudo cat /usr/share/ca-certificates/1759792.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/17597922.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh "sudo cat /etc/ssl/certs/17597922.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/17597922.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh "sudo cat /usr/share/ca-certificates/17597922.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.44s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-900196 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
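kubectl's --template flag above is also Go text/template: it ranges over the node's label map and prints each key. The same construct run locally over sample labels (the label values here are illustrative):

package main

import (
	"os"
	"text/template"
)

func main() {
	labels := map[string]string{ // sample data; the real values come from the node object
		"kubernetes.io/hostname": "functional-900196",
		"kubernetes.io/os":       "linux",
	}
	// The identical range construct the test passes to kubectl's --template flag.
	const tpl = `{{range $k, $v := .}}{{$k}} {{end}}`
	if err := template.Must(template.New("labels").Parse(tpl)).Execute(os.Stdout, labels); err != nil {
		panic(err)
	}
}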

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-900196 ssh "sudo systemctl is-active docker": exit status 1 (223.450368ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-900196 ssh "sudo systemctl is-active containerd": exit status 1 (228.538944ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)
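The exit status 3 in both runs is systemctl's "unit not active" code surfacing through ssh, which is what this test wants: docker and containerd must be inactive when crio is the selected runtime. A minimal sketch of the same check, assuming a host with systemctl on PATH:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// isActive runs `systemctl is-active <unit>`. systemctl exits 0 only when the
// unit is active; for an inactive unit it prints "inactive" and exits non-zero
// (typically status 3), which the test above accepts as success.
func isActive(unit string) (bool, error) {
	err := exec.Command("systemctl", "is-active", "--quiet", unit).Run()
	if err == nil {
		return true, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return false, nil // non-zero exit: unit is not active
	}
	return false, err // systemctl itself could not be run
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		active, err := isActive(unit)
		fmt.Println(unit, active, err)
	}
}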

                                                
                                    
x
+
TestFunctional/parallel/License (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "283.467196ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "53.687811ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "282.234776ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "54.271409ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (99.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-900196 /tmp/TestFunctionalparallelMountCmdany-port1051280194/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760797836634853090" to /tmp/TestFunctionalparallelMountCmdany-port1051280194/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760797836634853090" to /tmp/TestFunctionalparallelMountCmdany-port1051280194/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760797836634853090" to /tmp/TestFunctionalparallelMountCmdany-port1051280194/001/test-1760797836634853090
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-900196 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (202.212463ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1018 14:30:36.837410 1759792 retry.go:31] will retry after 598.348168ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 18 14:30 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 18 14:30 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 18 14:30 test-1760797836634853090
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh cat /mount-9p/test-1760797836634853090
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-900196 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [b1c21ed2-b86c-4e19-a613-f6d67149156e] Pending
helpers_test.go:352: "busybox-mount" [b1c21ed2-b86c-4e19-a613-f6d67149156e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E1018 14:31:00.845242 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:31:28.549474 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox-mount" [b1c21ed2-b86c-4e19-a613-f6d67149156e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [b1c21ed2-b86c-4e19-a613-f6d67149156e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 1m37.004154695s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-900196 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-900196 /tmp/TestFunctionalparallelMountCmdany-port1051280194/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (99.52s)
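The retry.go lines in this test ("will retry after 598.348168ms") reflect a poll-with-backoff loop around the findmnt probe: fail, wait a growing jittered delay, try again. A minimal sketch of that pattern; this is a generic helper under stated assumptions, not minikube's pkg/util/retry API:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry polls fn up to attempts times, sleeping a jittered, doubling delay
// between tries, the same shape as the "will retry after ..." log lines.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	_ = retry(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("exit status 1") // simulated findmnt failure
		}
		return nil
	})
}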

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-900196 /tmp/TestFunctionalparallelMountCmdspecific-port3223273432/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-900196 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (201.859005ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1018 14:32:16.360779 1759792 retry.go:31] will retry after 656.721287ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-900196 /tmp/TestFunctionalparallelMountCmdspecific-port3223273432/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-900196 ssh "sudo umount -f /mount-9p": exit status 1 (198.638027ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-900196 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-900196 /tmp/TestFunctionalparallelMountCmdspecific-port3223273432/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.85s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-900196 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3852977713/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-900196 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3852977713/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-900196 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3852977713/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-900196 ssh "findmnt -T" /mount1: exit status 1 (210.016963ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1018 14:32:18.223568 1759792 retry.go:31] will retry after 275.633774ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-900196 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-900196 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3852977713/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-900196 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3852977713/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-900196 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3852977713/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-900196 service list: (1.295184054s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.30s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.76s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-900196 service list -o json: (1.273679514s)
functional_test.go:1504: Took "1.273806373s" to run "out/minikube-linux-amd64 -p functional-900196 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.27s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-900196 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-900196
localhost/kicbase/echo-server:functional-900196
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-900196 image ls --format short --alsologtostderr:
I1018 14:34:36.461197 1772639 out.go:360] Setting OutFile to fd 1 ...
I1018 14:34:36.461504 1772639 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 14:34:36.461515 1772639 out.go:374] Setting ErrFile to fd 2...
I1018 14:34:36.461521 1772639 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 14:34:36.461825 1772639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1755824/.minikube/bin
I1018 14:34:36.462553 1772639 config.go:182] Loaded profile config "functional-900196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 14:34:36.462690 1772639 config.go:182] Loaded profile config "functional-900196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 14:34:36.463198 1772639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 14:34:36.463268 1772639 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 14:34:36.477029 1772639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42589
I1018 14:34:36.477563 1772639 main.go:141] libmachine: () Calling .GetVersion
I1018 14:34:36.478197 1772639 main.go:141] libmachine: Using API Version  1
I1018 14:34:36.478229 1772639 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 14:34:36.478695 1772639 main.go:141] libmachine: () Calling .GetMachineName
I1018 14:34:36.478906 1772639 main.go:141] libmachine: (functional-900196) Calling .GetState
I1018 14:34:36.481388 1772639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 14:34:36.481446 1772639 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 14:34:36.497057 1772639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42789
I1018 14:34:36.497476 1772639 main.go:141] libmachine: () Calling .GetVersion
I1018 14:34:36.497906 1772639 main.go:141] libmachine: Using API Version  1
I1018 14:34:36.497924 1772639 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 14:34:36.498331 1772639 main.go:141] libmachine: () Calling .GetMachineName
I1018 14:34:36.498563 1772639 main.go:141] libmachine: (functional-900196) Calling .DriverName
I1018 14:34:36.498771 1772639 ssh_runner.go:195] Run: systemctl --version
I1018 14:34:36.498803 1772639 main.go:141] libmachine: (functional-900196) Calling .GetSSHHostname
I1018 14:34:36.502441 1772639 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
I1018 14:34:36.503012 1772639 main.go:141] libmachine: (functional-900196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:a4:ac", ip: ""} in network mk-functional-900196: {Iface:virbr1 ExpiryTime:2025-10-18 15:22:18 +0000 UTC Type:0 Mac:52:54:00:e2:a4:ac Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:functional-900196 Clientid:01:52:54:00:e2:a4:ac}
I1018 14:34:36.503047 1772639 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined IP address 192.168.39.34 and MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
I1018 14:34:36.503373 1772639 main.go:141] libmachine: (functional-900196) Calling .GetSSHPort
I1018 14:34:36.503580 1772639 main.go:141] libmachine: (functional-900196) Calling .GetSSHKeyPath
I1018 14:34:36.503773 1772639 main.go:141] libmachine: (functional-900196) Calling .GetSSHUsername
I1018 14:34:36.503948 1772639 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/functional-900196/id_rsa Username:docker}
I1018 14:34:36.584832 1772639 ssh_runner.go:195] Run: sudo crictl images --output json
I1018 14:34:36.661101 1772639 main.go:141] libmachine: Making call to close driver server
I1018 14:34:36.661116 1772639 main.go:141] libmachine: (functional-900196) Calling .Close
I1018 14:34:36.661512 1772639 main.go:141] libmachine: Successfully made call to close driver server
I1018 14:34:36.661532 1772639 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 14:34:36.661542 1772639 main.go:141] libmachine: Making call to close driver server
I1018 14:34:36.661550 1772639 main.go:141] libmachine: (functional-900196) Calling .Close
I1018 14:34:36.661828 1772639 main.go:141] libmachine: Successfully made call to close driver server
I1018 14:34:36.661846 1772639 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 14:34:36.661917 1772639 main.go:141] libmachine: (functional-900196) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-900196 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ localhost/kicbase/echo-server           │ functional-900196  │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/minikube-local-cache-test     │ functional-900196  │ 06de39b489c39 │ 3.33kB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-900196 image ls --format table --alsologtostderr:
I1018 14:34:36.725434 1772730 out.go:360] Setting OutFile to fd 1 ...
I1018 14:34:36.725664 1772730 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 14:34:36.725674 1772730 out.go:374] Setting ErrFile to fd 2...
I1018 14:34:36.725677 1772730 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 14:34:36.725929 1772730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1755824/.minikube/bin
I1018 14:34:36.726543 1772730 config.go:182] Loaded profile config "functional-900196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 14:34:36.726637 1772730 config.go:182] Loaded profile config "functional-900196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 14:34:36.727008 1772730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 14:34:36.727072 1772730 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 14:34:36.744452 1772730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42265
I1018 14:34:36.744954 1772730 main.go:141] libmachine: () Calling .GetVersion
I1018 14:34:36.746427 1772730 main.go:141] libmachine: Using API Version  1
I1018 14:34:36.746545 1772730 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 14:34:36.748139 1772730 main.go:141] libmachine: () Calling .GetMachineName
I1018 14:34:36.748404 1772730 main.go:141] libmachine: (functional-900196) Calling .GetState
I1018 14:34:36.750667 1772730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 14:34:36.750727 1772730 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 14:34:36.769589 1772730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45585
I1018 14:34:36.770046 1772730 main.go:141] libmachine: () Calling .GetVersion
I1018 14:34:36.770584 1772730 main.go:141] libmachine: Using API Version  1
I1018 14:34:36.770610 1772730 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 14:34:36.771009 1772730 main.go:141] libmachine: () Calling .GetMachineName
I1018 14:34:36.771198 1772730 main.go:141] libmachine: (functional-900196) Calling .DriverName
I1018 14:34:36.771421 1772730 ssh_runner.go:195] Run: systemctl --version
I1018 14:34:36.771452 1772730 main.go:141] libmachine: (functional-900196) Calling .GetSSHHostname
I1018 14:34:36.774824 1772730 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
I1018 14:34:36.775225 1772730 main.go:141] libmachine: (functional-900196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:a4:ac", ip: ""} in network mk-functional-900196: {Iface:virbr1 ExpiryTime:2025-10-18 15:22:18 +0000 UTC Type:0 Mac:52:54:00:e2:a4:ac Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:functional-900196 Clientid:01:52:54:00:e2:a4:ac}
I1018 14:34:36.775253 1772730 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined IP address 192.168.39.34 and MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
I1018 14:34:36.775414 1772730 main.go:141] libmachine: (functional-900196) Calling .GetSSHPort
I1018 14:34:36.775599 1772730 main.go:141] libmachine: (functional-900196) Calling .GetSSHKeyPath
I1018 14:34:36.775740 1772730 main.go:141] libmachine: (functional-900196) Calling .GetSSHUsername
I1018 14:34:36.775912 1772730 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/functional-900196/id_rsa Username:docker}
I1018 14:34:36.864650 1772730 ssh_runner.go:195] Run: sudo crictl images --output json
I1018 14:34:36.930997 1772730 main.go:141] libmachine: Making call to close driver server
I1018 14:34:36.931015 1772730 main.go:141] libmachine: (functional-900196) Calling .Close
I1018 14:34:36.931285 1772730 main.go:141] libmachine: Successfully made call to close driver server
I1018 14:34:36.931301 1772730 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 14:34:36.931327 1772730 main.go:141] libmachine: Making call to close driver server
I1018 14:34:36.931330 1772730 main.go:141] libmachine: (functional-900196) DBG | Closing plugin on server side
I1018 14:34:36.931352 1772730 main.go:141] libmachine: (functional-900196) Calling .Close
I1018 14:34:36.931616 1772730 main.go:141] libmachine: Successfully made call to close driver server
I1018 14:34:36.931632 1772730 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)
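The JSON listing in the test that follows is a flat array of image records. A minimal sketch decoding that shape, with the field set inferred from the visible output (note that size is a quoted string there, not a number); the struct name is illustrative:

package main

import (
	"encoding/json"
	"fmt"
)

// image matches the objects emitted by `minikube image ls --format json`
// in the next test's stdout; fields inferred from that output.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// Truncated sample record in the same shape as the output below.
	data := []byte(`[{"id":"0184c1613d929...","repoDigests":["registry.k8s.io/pause@sha256:1000de..."],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]`)
	var imgs []image
	if err := json.Unmarshal(data, &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs {
		fmt.Printf("%-40s %s bytes\n", img.RepoTags[0], img.Size)
	}
}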

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-900196 image ls --format json --alsologtostderr:
[{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c
82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"06de39b489c39faa38cfd2d72e1369d8ccfe772bca031140ca8d53fece378f9e","repoDigests":["localhost/minikube-local-cache-test@sha256:f544d5580a4ea9f37c92ae85a2e0f740f746475b38be6510682933ec928f8b98"],"repoTags":["localhost/minikube-local-cache-test:functional-900196"],"size":"3330"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"
56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-900196"],"size":"494387
7"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125
ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-900196 image ls --format json --alsologtostderr:
I1018 14:34:36.720370 1772724 out.go:360] Setting OutFile to fd 1 ...
I1018 14:34:36.720766 1772724 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 14:34:36.720779 1772724 out.go:374] Setting ErrFile to fd 2...
I1018 14:34:36.720785 1772724 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 14:34:36.721148 1772724 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1755824/.minikube/bin
I1018 14:34:36.722137 1772724 config.go:182] Loaded profile config "functional-900196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 14:34:36.722292 1772724 config.go:182] Loaded profile config "functional-900196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 14:34:36.722929 1772724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 14:34:36.723028 1772724 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 14:34:36.738381 1772724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43371
I1018 14:34:36.738856 1772724 main.go:141] libmachine: () Calling .GetVersion
I1018 14:34:36.739432 1772724 main.go:141] libmachine: Using API Version  1
I1018 14:34:36.739461 1772724 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 14:34:36.739858 1772724 main.go:141] libmachine: () Calling .GetMachineName
I1018 14:34:36.740082 1772724 main.go:141] libmachine: (functional-900196) Calling .GetState
I1018 14:34:36.741987 1772724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 14:34:36.742032 1772724 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 14:34:36.757474 1772724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36453
I1018 14:34:36.757938 1772724 main.go:141] libmachine: () Calling .GetVersion
I1018 14:34:36.758502 1772724 main.go:141] libmachine: Using API Version  1
I1018 14:34:36.758537 1772724 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 14:34:36.758941 1772724 main.go:141] libmachine: () Calling .GetMachineName
I1018 14:34:36.759163 1772724 main.go:141] libmachine: (functional-900196) Calling .DriverName
I1018 14:34:36.759376 1772724 ssh_runner.go:195] Run: systemctl --version
I1018 14:34:36.759411 1772724 main.go:141] libmachine: (functional-900196) Calling .GetSSHHostname
I1018 14:34:36.762683 1772724 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
I1018 14:34:36.763136 1772724 main.go:141] libmachine: (functional-900196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:a4:ac", ip: ""} in network mk-functional-900196: {Iface:virbr1 ExpiryTime:2025-10-18 15:22:18 +0000 UTC Type:0 Mac:52:54:00:e2:a4:ac Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:functional-900196 Clientid:01:52:54:00:e2:a4:ac}
I1018 14:34:36.763173 1772724 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined IP address 192.168.39.34 and MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
I1018 14:34:36.763348 1772724 main.go:141] libmachine: (functional-900196) Calling .GetSSHPort
I1018 14:34:36.763528 1772724 main.go:141] libmachine: (functional-900196) Calling .GetSSHKeyPath
I1018 14:34:36.763810 1772724 main.go:141] libmachine: (functional-900196) Calling .GetSSHUsername
I1018 14:34:36.763941 1772724 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/functional-900196/id_rsa Username:docker}
I1018 14:34:36.851506 1772724 ssh_runner.go:195] Run: sudo crictl images --output json
I1018 14:34:36.908444 1772724 main.go:141] libmachine: Making call to close driver server
I1018 14:34:36.908457 1772724 main.go:141] libmachine: (functional-900196) Calling .Close
I1018 14:34:36.908822 1772724 main.go:141] libmachine: Successfully made call to close driver server
I1018 14:34:36.908850 1772724 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 14:34:36.908860 1772724 main.go:141] libmachine: Making call to close driver server
I1018 14:34:36.908868 1772724 main.go:141] libmachine: (functional-900196) Calling .Close
I1018 14:34:36.909239 1772724 main.go:141] libmachine: Successfully made call to close driver server
I1018 14:34:36.909285 1772724 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 14:34:36.909291 1772724 main.go:141] libmachine: (functional-900196) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
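
Both the JSON listing above and the YAML listing below come from the same underlying call, sudo crictl images --output json on the node (visible in each stderr trace), which minikube then re-serializes in the requested format. Below is a minimal sketch of consuming the JSON form; the struct fields mirror the keys shown in the stdout above (id, repoDigests, repoTags, size), while the filename and stdin plumbing are illustrative only and not part of the test suite.

// parse_images.go — decode `image ls --format json` output (a bare JSON
// array, as shown above) and print one tag/size pair per image.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, serialized as a string
}

func main() {
	var images []image
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%s  %s bytes\n", img.RepoTags[0], img.Size)
		}
	}
}

Piping out/minikube-linux-amd64 -p functional-900196 image ls --format json into this program would print lines such as registry.k8s.io/etcd:3.6.4-0  195976448 bytes.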

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-900196 image ls --format yaml --alsologtostderr:
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-900196
size: "4943877"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 06de39b489c39faa38cfd2d72e1369d8ccfe772bca031140ca8d53fece378f9e
repoDigests:
- localhost/minikube-local-cache-test@sha256:f544d5580a4ea9f37c92ae85a2e0f740f746475b38be6510682933ec928f8b98
repoTags:
- localhost/minikube-local-cache-test:functional-900196
size: "3330"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-900196 image ls --format yaml --alsologtostderr:
I1018 14:34:36.459029 1772640 out.go:360] Setting OutFile to fd 1 ...
I1018 14:34:36.459157 1772640 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 14:34:36.459163 1772640 out.go:374] Setting ErrFile to fd 2...
I1018 14:34:36.459169 1772640 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 14:34:36.459486 1772640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1755824/.minikube/bin
I1018 14:34:36.460256 1772640 config.go:182] Loaded profile config "functional-900196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 14:34:36.460400 1772640 config.go:182] Loaded profile config "functional-900196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 14:34:36.460869 1772640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 14:34:36.460949 1772640 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 14:34:36.475377 1772640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40081
I1018 14:34:36.475905 1772640 main.go:141] libmachine: () Calling .GetVersion
I1018 14:34:36.476663 1772640 main.go:141] libmachine: Using API Version  1
I1018 14:34:36.476704 1772640 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 14:34:36.477100 1772640 main.go:141] libmachine: () Calling .GetMachineName
I1018 14:34:36.477377 1772640 main.go:141] libmachine: (functional-900196) Calling .GetState
I1018 14:34:36.480141 1772640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 14:34:36.480190 1772640 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 14:34:36.494762 1772640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34181
I1018 14:34:36.495267 1772640 main.go:141] libmachine: () Calling .GetVersion
I1018 14:34:36.495805 1772640 main.go:141] libmachine: Using API Version  1
I1018 14:34:36.495837 1772640 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 14:34:36.496292 1772640 main.go:141] libmachine: () Calling .GetMachineName
I1018 14:34:36.496521 1772640 main.go:141] libmachine: (functional-900196) Calling .DriverName
I1018 14:34:36.496758 1772640 ssh_runner.go:195] Run: systemctl --version
I1018 14:34:36.496789 1772640 main.go:141] libmachine: (functional-900196) Calling .GetSSHHostname
I1018 14:34:36.500589 1772640 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
I1018 14:34:36.501215 1772640 main.go:141] libmachine: (functional-900196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:a4:ac", ip: ""} in network mk-functional-900196: {Iface:virbr1 ExpiryTime:2025-10-18 15:22:18 +0000 UTC Type:0 Mac:52:54:00:e2:a4:ac Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:functional-900196 Clientid:01:52:54:00:e2:a4:ac}
I1018 14:34:36.501246 1772640 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined IP address 192.168.39.34 and MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
I1018 14:34:36.501455 1772640 main.go:141] libmachine: (functional-900196) Calling .GetSSHPort
I1018 14:34:36.501607 1772640 main.go:141] libmachine: (functional-900196) Calling .GetSSHKeyPath
I1018 14:34:36.501744 1772640 main.go:141] libmachine: (functional-900196) Calling .GetSSHUsername
I1018 14:34:36.501928 1772640 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/functional-900196/id_rsa Username:docker}
I1018 14:34:36.581548 1772640 ssh_runner.go:195] Run: sudo crictl images --output json
I1018 14:34:36.653142 1772640 main.go:141] libmachine: Making call to close driver server
I1018 14:34:36.653160 1772640 main.go:141] libmachine: (functional-900196) Calling .Close
I1018 14:34:36.653483 1772640 main.go:141] libmachine: Successfully made call to close driver server
I1018 14:34:36.653503 1772640 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 14:34:36.653521 1772640 main.go:141] libmachine: Making call to close driver server
I1018 14:34:36.653529 1772640 main.go:141] libmachine: (functional-900196) Calling .Close
I1018 14:34:36.653782 1772640 main.go:141] libmachine: Successfully made call to close driver server
I1018 14:34:36.653803 1772640 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 14:34:36.653828 1772640 main.go:141] libmachine: (functional-900196) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-900196 ssh pgrep buildkitd: exit status 1 (218.008006ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 image build -t localhost/my-image:functional-900196 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-900196 image build -t localhost/my-image:functional-900196 testdata/build --alsologtostderr: (2.034309811s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-900196 image build -t localhost/my-image:functional-900196 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8477f896d87
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-900196
--> 7ada025137e
Successfully tagged localhost/my-image:functional-900196
7ada025137e72433e4151d3dc0cab89d069321b92910a79013d87a250ac74aaa
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-900196 image build -t localhost/my-image:functional-900196 testdata/build --alsologtostderr:
I1018 14:34:36.679207 1772715 out.go:360] Setting OutFile to fd 1 ...
I1018 14:34:36.679503 1772715 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 14:34:36.679516 1772715 out.go:374] Setting ErrFile to fd 2...
I1018 14:34:36.679525 1772715 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 14:34:36.679849 1772715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1755824/.minikube/bin
I1018 14:34:36.680695 1772715 config.go:182] Loaded profile config "functional-900196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 14:34:36.681505 1772715 config.go:182] Loaded profile config "functional-900196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 14:34:36.682062 1772715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 14:34:36.682152 1772715 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 14:34:36.699728 1772715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46387
I1018 14:34:36.700367 1772715 main.go:141] libmachine: () Calling .GetVersion
I1018 14:34:36.701069 1772715 main.go:141] libmachine: Using API Version  1
I1018 14:34:36.701094 1772715 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 14:34:36.701508 1772715 main.go:141] libmachine: () Calling .GetMachineName
I1018 14:34:36.701757 1772715 main.go:141] libmachine: (functional-900196) Calling .GetState
I1018 14:34:36.704196 1772715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 14:34:36.704247 1772715 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 14:34:36.723794 1772715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35135
I1018 14:34:36.724296 1772715 main.go:141] libmachine: () Calling .GetVersion
I1018 14:34:36.724963 1772715 main.go:141] libmachine: Using API Version  1
I1018 14:34:36.725004 1772715 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 14:34:36.725461 1772715 main.go:141] libmachine: () Calling .GetMachineName
I1018 14:34:36.725689 1772715 main.go:141] libmachine: (functional-900196) Calling .DriverName
I1018 14:34:36.725982 1772715 ssh_runner.go:195] Run: systemctl --version
I1018 14:34:36.726010 1772715 main.go:141] libmachine: (functional-900196) Calling .GetSSHHostname
I1018 14:34:36.729680 1772715 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
I1018 14:34:36.730172 1772715 main.go:141] libmachine: (functional-900196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:a4:ac", ip: ""} in network mk-functional-900196: {Iface:virbr1 ExpiryTime:2025-10-18 15:22:18 +0000 UTC Type:0 Mac:52:54:00:e2:a4:ac Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:functional-900196 Clientid:01:52:54:00:e2:a4:ac}
I1018 14:34:36.730211 1772715 main.go:141] libmachine: (functional-900196) DBG | domain functional-900196 has defined IP address 192.168.39.34 and MAC address 52:54:00:e2:a4:ac in network mk-functional-900196
I1018 14:34:36.730498 1772715 main.go:141] libmachine: (functional-900196) Calling .GetSSHPort
I1018 14:34:36.730705 1772715 main.go:141] libmachine: (functional-900196) Calling .GetSSHKeyPath
I1018 14:34:36.730917 1772715 main.go:141] libmachine: (functional-900196) Calling .GetSSHUsername
I1018 14:34:36.731090 1772715 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/functional-900196/id_rsa Username:docker}
I1018 14:34:36.817700 1772715 build_images.go:161] Building image from path: /tmp/build.3867854511.tar
I1018 14:34:36.817816 1772715 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1018 14:34:36.831735 1772715 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3867854511.tar
I1018 14:34:36.837471 1772715 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3867854511.tar: stat -c "%s %y" /var/lib/minikube/build/build.3867854511.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3867854511.tar': No such file or directory
I1018 14:34:36.837507 1772715 ssh_runner.go:362] scp /tmp/build.3867854511.tar --> /var/lib/minikube/build/build.3867854511.tar (3072 bytes)
I1018 14:34:36.896675 1772715 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3867854511
I1018 14:34:36.932785 1772715 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3867854511 -xf /var/lib/minikube/build/build.3867854511.tar
I1018 14:34:36.948448 1772715 crio.go:315] Building image: /var/lib/minikube/build/build.3867854511
I1018 14:34:36.948531 1772715 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-900196 /var/lib/minikube/build/build.3867854511 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1018 14:34:38.618323 1772715 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-900196 /var/lib/minikube/build/build.3867854511 --cgroup-manager=cgroupfs: (1.669754068s)
I1018 14:34:38.618420 1772715 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3867854511
I1018 14:34:38.634675 1772715 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3867854511.tar
I1018 14:34:38.648717 1772715 build_images.go:217] Built localhost/my-image:functional-900196 from /tmp/build.3867854511.tar
I1018 14:34:38.648770 1772715 build_images.go:133] succeeded building to: functional-900196
I1018 14:34:38.648776 1772715 build_images.go:134] failed building to: 
I1018 14:34:38.648849 1772715 main.go:141] libmachine: Making call to close driver server
I1018 14:34:38.648872 1772715 main.go:141] libmachine: (functional-900196) Calling .Close
I1018 14:34:38.649194 1772715 main.go:141] libmachine: Successfully made call to close driver server
I1018 14:34:38.649214 1772715 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 14:34:38.649224 1772715 main.go:141] libmachine: Making call to close driver server
I1018 14:34:38.649231 1772715 main.go:141] libmachine: (functional-900196) Calling .Close
I1018 14:34:38.649537 1772715 main.go:141] libmachine: (functional-900196) DBG | Closing plugin on server side
I1018 14:34:38.649635 1772715 main.go:141] libmachine: Successfully made call to close driver server
I1018 14:34:38.649662 1772715 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 image ls
E1018 14:36:00.845560 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.50s)
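
The stderr trace above shows the whole build path: the context directory is packed into a tar under /tmp, copied to /var/lib/minikube/build on the node, extracted, and built with rootful podman under the cgroupfs manager (the failed pgrep buildkitd probe at the top is expected with the crio runtime, where the build goes through podman rather than BuildKit). Below is a rough equivalent driven through the public CLI; using minikube cp and minikube ssh here is an assumption, since the test itself drives the node over its own SSH runner.

// build_flow.go — replay the image-build sequence from the trace above.
package main

import (
	"log"
	"os"
	"os/exec"
)

const profile = "functional-900196"

func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", name, args, err)
	}
}

func main() {
	tar := "/tmp/build.tar"                   // the test uses a random suffix
	remote := "/var/lib/minikube/build/build" // extraction dir on the node

	run("tar", "-cf", tar, "-C", "testdata/build", ".")
	run("minikube", "-p", profile, "cp", tar, remote+".tar")
	run("minikube", "-p", profile, "ssh", "--", "sudo", "mkdir", "-p", remote)
	run("minikube", "-p", profile, "ssh", "--", "sudo", "tar", "-C", remote, "-xf", remote+".tar")
	run("minikube", "-p", profile, "ssh", "--", // same podman flags as in the trace
		"sudo", "podman", "build", "-t", "localhost/my-image:"+profile,
		remote, "--cgroup-manager=cgroupfs")
}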

TestFunctional/parallel/ImageCommands/Setup (1.04s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.019951214s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-900196
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.04s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 image load --daemon kicbase/echo-server:functional-900196 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-900196 image load --daemon kicbase/echo-server:functional-900196 --alsologtostderr: (1.511210661s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.76s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 image load --daemon kicbase/echo-server:functional-900196 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.99s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-900196
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 image load --daemon kicbase/echo-server:functional-900196 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.34s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 image save kicbase/echo-server:functional-900196 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 image rm kicbase/echo-server:functional-900196 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.90s)
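
Taken together, ImageSaveToFile, ImageRemove, and ImageLoadFromFile above form a round-trip: save the image to a tar on the host, delete it from the node, and load it back from the file. A minimal sketch chaining the same subcommands, with the image name and tar path copied verbatim from the log:

// roundtrip.go — the save → rm → load cycle exercised above.
package main

import (
	"log"
	"os"
	"os/exec"
)

func mk(args ...string) {
	cmd := exec.Command("out/minikube-linux-amd64",
		append([]string{"-p", "functional-900196"}, args...)...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("minikube %v: %v", args, err)
	}
}

func main() {
	img := "kicbase/echo-server:functional-900196"
	tar := "/home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar"
	mk("image", "save", img, tar) // ImageSaveToFile
	mk("image", "rm", img)        // ImageRemove
	mk("image", "load", tar)      // ImageLoadFromFile
	mk("image", "ls")             // each step re-lists to verify
}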

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-900196
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-900196 image save --daemon kicbase/echo-server:functional-900196 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-900196
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-900196
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-900196
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-900196
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (216.78s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 14:39:24.563738 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:39:24.570610 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:39:24.582837 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:39:24.604267 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:39:24.645780 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:39:24.727321 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:39:24.888938 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:39:25.210560 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:39:25.852689 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:39:27.134372 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:39:29.696142 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:39:34.818126 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:39:45.059692 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:40:05.541509 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:40:46.503519 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-609178 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (3m36.038991694s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (216.78s)
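
The E-lines interleaved above appear to come from client-go's certificate-rotation watcher inside the test binary: it keeps trying to reload the client cert of the functional-900196 profile, which was deleted earlier in the run, so the "no such file or directory" errors are noise rather than a problem with the cluster being started. For reference, a sketch of the bring-up invocation itself, with every flag copied verbatim from the ha_test.go:101 command line:

// ha_start.go — the HA cluster bring-up used above (~3m36s in this run).
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-609178", "start",
		"--ha", "--memory", "3072", "--wait", "true",
		"--alsologtostderr", "-v", "5",
		"--driver=kvm2", "--container-runtime=crio",
		"--auto-update-drivers=false")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}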

TestMultiControlPlane/serial/DeployApp (5.58s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 kubectl -- rollout status deployment/busybox
E1018 14:41:00.845270 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-609178 kubectl -- rollout status deployment/busybox: (3.189319994s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 kubectl -- exec busybox-7b57f96db7-2jgsh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 kubectl -- exec busybox-7b57f96db7-5lpt8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 kubectl -- exec busybox-7b57f96db7-n66pq -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 kubectl -- exec busybox-7b57f96db7-2jgsh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 kubectl -- exec busybox-7b57f96db7-5lpt8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 kubectl -- exec busybox-7b57f96db7-n66pq -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 kubectl -- exec busybox-7b57f96db7-2jgsh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 kubectl -- exec busybox-7b57f96db7-5lpt8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 kubectl -- exec busybox-7b57f96db7-n66pq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.58s)
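
Each busybox replica above must resolve an external name (kubernetes.io), the API service's short name (kubernetes.default), and its fully qualified in-cluster name. A minimal sketch of that verification loop; the pod names below are the ones discovered above via jsonpath and would differ from run to run.

// dns_check.go — per-pod DNS resolution checks, as in the test above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7b57f96db7-2jgsh", "busybox-7b57f96db7-5lpt8", "busybox-7b57f96db7-n66pq"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			out, err := exec.Command("kubectl", "--context", "ha-609178",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			fmt.Printf("%s -> %s: err=%v\n%s", pod, name, err, out)
		}
	}
}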

TestMultiControlPlane/serial/PingHostFromPods (1.34s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 kubectl -- exec busybox-7b57f96db7-2jgsh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 kubectl -- exec busybox-7b57f96db7-2jgsh -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 kubectl -- exec busybox-7b57f96db7-5lpt8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 kubectl -- exec busybox-7b57f96db7-5lpt8 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 kubectl -- exec busybox-7b57f96db7-n66pq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 kubectl -- exec busybox-7b57f96db7-n66pq -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.34s)
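
The shell pipeline above extracts the resolved address from busybox nslookup output (the awk 'NR==5' / cut -d' ' -f3 combination picks the address field out of that image's output format, an assumption specific to busybox), and the follow-up ping confirms each pod can reach the host-side gateway. A sketch running the same two steps in one pod, with the 192.168.39.1 gateway taken from the log:

// ping_host.go — resolve host.minikube.internal in a pod, then ping the host.
package main

import (
	"fmt"
	"os/exec"
)

func execInPod(pod, script string) ([]byte, error) {
	return exec.Command("kubectl", "--context", "ha-609178",
		"exec", pod, "--", "sh", "-c", script).CombinedOutput()
}

func main() {
	pod := "busybox-7b57f96db7-2jgsh" // illustrative; the test iterates all pods
	ip, err := execInPod(pod, "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	fmt.Printf("resolved: %s err=%v\n", ip, err)
	out, err := execInPod(pod, "ping -c 1 192.168.39.1")
	fmt.Printf("%s err=%v\n", out, err)
}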

TestMultiControlPlane/serial/AddWorkerNode (46.97s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-609178 node add --alsologtostderr -v 5: (46.035902901s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (46.97s)

TestMultiControlPlane/serial/NodeLabels (0.08s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-609178 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.08s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.94s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.94s)

TestMultiControlPlane/serial/CopyFile (13.91s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 cp testdata/cp-test.txt ha-609178:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 ssh -n ha-609178 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 cp ha-609178:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3027812356/001/cp-test_ha-609178.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 ssh -n ha-609178 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 cp ha-609178:/home/docker/cp-test.txt ha-609178-m02:/home/docker/cp-test_ha-609178_ha-609178-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 ssh -n ha-609178 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 ssh -n ha-609178-m02 "sudo cat /home/docker/cp-test_ha-609178_ha-609178-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 cp ha-609178:/home/docker/cp-test.txt ha-609178-m03:/home/docker/cp-test_ha-609178_ha-609178-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 ssh -n ha-609178 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 ssh -n ha-609178-m03 "sudo cat /home/docker/cp-test_ha-609178_ha-609178-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 cp ha-609178:/home/docker/cp-test.txt ha-609178-m04:/home/docker/cp-test_ha-609178_ha-609178-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 ssh -n ha-609178 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 ssh -n ha-609178-m04 "sudo cat /home/docker/cp-test_ha-609178_ha-609178-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 cp testdata/cp-test.txt ha-609178-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 ssh -n ha-609178-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 cp ha-609178-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3027812356/001/cp-test_ha-609178-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 ssh -n ha-609178-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 cp ha-609178-m02:/home/docker/cp-test.txt ha-609178:/home/docker/cp-test_ha-609178-m02_ha-609178.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 ssh -n ha-609178-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 ssh -n ha-609178 "sudo cat /home/docker/cp-test_ha-609178-m02_ha-609178.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 cp ha-609178-m02:/home/docker/cp-test.txt ha-609178-m03:/home/docker/cp-test_ha-609178-m02_ha-609178-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 ssh -n ha-609178-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 ssh -n ha-609178-m03 "sudo cat /home/docker/cp-test_ha-609178-m02_ha-609178-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 cp ha-609178-m02:/home/docker/cp-test.txt ha-609178-m04:/home/docker/cp-test_ha-609178-m02_ha-609178-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 ssh -n ha-609178-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 ssh -n ha-609178-m04 "sudo cat /home/docker/cp-test_ha-609178-m02_ha-609178-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 cp testdata/cp-test.txt ha-609178-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 ssh -n ha-609178-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 cp ha-609178-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3027812356/001/cp-test_ha-609178-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 ssh -n ha-609178-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 cp ha-609178-m03:/home/docker/cp-test.txt ha-609178:/home/docker/cp-test_ha-609178-m03_ha-609178.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 ssh -n ha-609178-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 ssh -n ha-609178 "sudo cat /home/docker/cp-test_ha-609178-m03_ha-609178.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 cp ha-609178-m03:/home/docker/cp-test.txt ha-609178-m02:/home/docker/cp-test_ha-609178-m03_ha-609178-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 ssh -n ha-609178-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 ssh -n ha-609178-m02 "sudo cat /home/docker/cp-test_ha-609178-m03_ha-609178-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 cp ha-609178-m03:/home/docker/cp-test.txt ha-609178-m04:/home/docker/cp-test_ha-609178-m03_ha-609178-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 ssh -n ha-609178-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 ssh -n ha-609178-m04 "sudo cat /home/docker/cp-test_ha-609178-m03_ha-609178-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 cp testdata/cp-test.txt ha-609178-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 ssh -n ha-609178-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 cp ha-609178-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3027812356/001/cp-test_ha-609178-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 ssh -n ha-609178-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 cp ha-609178-m04:/home/docker/cp-test.txt ha-609178:/home/docker/cp-test_ha-609178-m04_ha-609178.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 ssh -n ha-609178-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 ssh -n ha-609178 "sudo cat /home/docker/cp-test_ha-609178-m04_ha-609178.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 cp ha-609178-m04:/home/docker/cp-test.txt ha-609178-m02:/home/docker/cp-test_ha-609178-m04_ha-609178-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 ssh -n ha-609178-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 ssh -n ha-609178-m02 "sudo cat /home/docker/cp-test_ha-609178-m04_ha-609178-m02.txt"
E1018 14:42:08.425706 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 cp ha-609178-m04:/home/docker/cp-test.txt ha-609178-m03:/home/docker/cp-test_ha-609178-m04_ha-609178-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 ssh -n ha-609178-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 ssh -n ha-609178-m03 "sudo cat /home/docker/cp-test_ha-609178-m04_ha-609178-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.91s)
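
CopyFile above exercises every ordered pair of machines: the test data is copied host-to-node, node-to-host, and node-to-node, and each hop is verified with ssh -n <node> "sudo cat ...". A compact sketch of the same matrix; the per-pair filenames mirror the cp-test_<src>_<dst>.txt convention visible in the log.

// copy_matrix.go — node-to-node copy/verify matrix, as exercised above.
package main

import (
	"fmt"
	"os/exec"
)

func mk(args ...string) error {
	out, err := exec.Command("out/minikube-linux-amd64",
		append([]string{"-p", "ha-609178"}, args...)...).CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	nodes := []string{"ha-609178", "ha-609178-m02", "ha-609178-m03", "ha-609178-m04"}
	for _, src := range nodes {
		for _, dst := range nodes {
			if src == dst {
				continue
			}
			target := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
			if err := mk("cp", src+":/home/docker/cp-test.txt", dst+":"+target); err != nil {
				fmt.Println("cp failed:", err)
				continue
			}
			if err := mk("ssh", "-n", dst, "sudo cat "+target); err != nil {
				fmt.Println("verify failed:", err)
			}
		}
	}
}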

TestMultiControlPlane/serial/StopSecondaryNode (86.41s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 node stop m02 --alsologtostderr -v 5
E1018 14:42:23.911848 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-609178 node stop m02 --alsologtostderr -v 5: (1m25.693382157s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-609178 status --alsologtostderr -v 5: exit status 7 (720.187042ms)

-- stdout --
	ha-609178
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-609178-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-609178-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-609178-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1018 14:43:35.119935 1777999 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:43:35.120195 1777999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:43:35.120205 1777999 out.go:374] Setting ErrFile to fd 2...
	I1018 14:43:35.120208 1777999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:43:35.120463 1777999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1755824/.minikube/bin
	I1018 14:43:35.120661 1777999 out.go:368] Setting JSON to false
	I1018 14:43:35.120697 1777999 mustload.go:65] Loading cluster: ha-609178
	I1018 14:43:35.120780 1777999 notify.go:220] Checking for updates...
	I1018 14:43:35.121239 1777999 config.go:182] Loaded profile config "ha-609178": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:43:35.121259 1777999 status.go:174] checking status of ha-609178 ...
	I1018 14:43:35.121853 1777999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:43:35.121903 1777999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:43:35.144247 1777999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41975
	I1018 14:43:35.144861 1777999 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:43:35.145756 1777999 main.go:141] libmachine: Using API Version  1
	I1018 14:43:35.145794 1777999 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:43:35.146222 1777999 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:43:35.146526 1777999 main.go:141] libmachine: (ha-609178) Calling .GetState
	I1018 14:43:35.148429 1777999 status.go:371] ha-609178 host status = "Running" (err=<nil>)
	I1018 14:43:35.148446 1777999 host.go:66] Checking if "ha-609178" exists ...
	I1018 14:43:35.148822 1777999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:43:35.148869 1777999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:43:35.163798 1777999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33833
	I1018 14:43:35.164273 1777999 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:43:35.164797 1777999 main.go:141] libmachine: Using API Version  1
	I1018 14:43:35.164825 1777999 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:43:35.165217 1777999 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:43:35.165454 1777999 main.go:141] libmachine: (ha-609178) Calling .GetIP
	I1018 14:43:35.168805 1777999 main.go:141] libmachine: (ha-609178) DBG | domain ha-609178 has defined MAC address 52:54:00:47:c0:30 in network mk-ha-609178
	I1018 14:43:35.169362 1777999 main.go:141] libmachine: (ha-609178) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:c0:30", ip: ""} in network mk-ha-609178: {Iface:virbr1 ExpiryTime:2025-10-18 15:37:39 +0000 UTC Type:0 Mac:52:54:00:47:c0:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-609178 Clientid:01:52:54:00:47:c0:30}
	I1018 14:43:35.169389 1777999 main.go:141] libmachine: (ha-609178) DBG | domain ha-609178 has defined IP address 192.168.39.43 and MAC address 52:54:00:47:c0:30 in network mk-ha-609178
	I1018 14:43:35.169585 1777999 host.go:66] Checking if "ha-609178" exists ...
	I1018 14:43:35.169917 1777999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:43:35.169975 1777999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:43:35.184382 1777999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32913
	I1018 14:43:35.184933 1777999 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:43:35.185495 1777999 main.go:141] libmachine: Using API Version  1
	I1018 14:43:35.185525 1777999 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:43:35.185944 1777999 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:43:35.186173 1777999 main.go:141] libmachine: (ha-609178) Calling .DriverName
	I1018 14:43:35.186446 1777999 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 14:43:35.186494 1777999 main.go:141] libmachine: (ha-609178) Calling .GetSSHHostname
	I1018 14:43:35.190267 1777999 main.go:141] libmachine: (ha-609178) DBG | domain ha-609178 has defined MAC address 52:54:00:47:c0:30 in network mk-ha-609178
	I1018 14:43:35.190907 1777999 main.go:141] libmachine: (ha-609178) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:c0:30", ip: ""} in network mk-ha-609178: {Iface:virbr1 ExpiryTime:2025-10-18 15:37:39 +0000 UTC Type:0 Mac:52:54:00:47:c0:30 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-609178 Clientid:01:52:54:00:47:c0:30}
	I1018 14:43:35.190937 1777999 main.go:141] libmachine: (ha-609178) DBG | domain ha-609178 has defined IP address 192.168.39.43 and MAC address 52:54:00:47:c0:30 in network mk-ha-609178
	I1018 14:43:35.191257 1777999 main.go:141] libmachine: (ha-609178) Calling .GetSSHPort
	I1018 14:43:35.191512 1777999 main.go:141] libmachine: (ha-609178) Calling .GetSSHKeyPath
	I1018 14:43:35.191683 1777999 main.go:141] libmachine: (ha-609178) Calling .GetSSHUsername
	I1018 14:43:35.191890 1777999 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/ha-609178/id_rsa Username:docker}
	I1018 14:43:35.281450 1777999 ssh_runner.go:195] Run: systemctl --version
	I1018 14:43:35.291566 1777999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 14:43:35.316172 1777999 kubeconfig.go:125] found "ha-609178" server: "https://192.168.39.254:8443"
	I1018 14:43:35.316218 1777999 api_server.go:166] Checking apiserver status ...
	I1018 14:43:35.316257 1777999 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 14:43:35.340859 1777999 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1382/cgroup
	W1018 14:43:35.354695 1777999 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1382/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1018 14:43:35.354772 1777999 ssh_runner.go:195] Run: ls
	I1018 14:43:35.361155 1777999 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1018 14:43:35.366286 1777999 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1018 14:43:35.366318 1777999 status.go:463] ha-609178 apiserver status = Running (err=<nil>)
	I1018 14:43:35.366329 1777999 status.go:176] ha-609178 status: &{Name:ha-609178 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 14:43:35.366365 1777999 status.go:174] checking status of ha-609178-m02 ...
	I1018 14:43:35.366689 1777999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:43:35.366729 1777999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:43:35.380369 1777999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39525
	I1018 14:43:35.380947 1777999 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:43:35.381594 1777999 main.go:141] libmachine: Using API Version  1
	I1018 14:43:35.381618 1777999 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:43:35.381977 1777999 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:43:35.382170 1777999 main.go:141] libmachine: (ha-609178-m02) Calling .GetState
	I1018 14:43:35.384006 1777999 status.go:371] ha-609178-m02 host status = "Stopped" (err=<nil>)
	I1018 14:43:35.384021 1777999 status.go:384] host is not running, skipping remaining checks
	I1018 14:43:35.384027 1777999 status.go:176] ha-609178-m02 status: &{Name:ha-609178-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 14:43:35.384048 1777999 status.go:174] checking status of ha-609178-m03 ...
	I1018 14:43:35.384454 1777999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:43:35.384542 1777999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:43:35.398693 1777999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46109
	I1018 14:43:35.399324 1777999 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:43:35.400038 1777999 main.go:141] libmachine: Using API Version  1
	I1018 14:43:35.400071 1777999 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:43:35.400443 1777999 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:43:35.400669 1777999 main.go:141] libmachine: (ha-609178-m03) Calling .GetState
	I1018 14:43:35.402391 1777999 status.go:371] ha-609178-m03 host status = "Running" (err=<nil>)
	I1018 14:43:35.402409 1777999 host.go:66] Checking if "ha-609178-m03" exists ...
	I1018 14:43:35.402723 1777999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:43:35.402760 1777999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:43:35.417461 1777999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33179
	I1018 14:43:35.418063 1777999 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:43:35.418665 1777999 main.go:141] libmachine: Using API Version  1
	I1018 14:43:35.418702 1777999 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:43:35.419100 1777999 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:43:35.419334 1777999 main.go:141] libmachine: (ha-609178-m03) Calling .GetIP
	I1018 14:43:35.422707 1777999 main.go:141] libmachine: (ha-609178-m03) DBG | domain ha-609178-m03 has defined MAC address 52:54:00:1e:56:77 in network mk-ha-609178
	I1018 14:43:35.423327 1777999 main.go:141] libmachine: (ha-609178-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:56:77", ip: ""} in network mk-ha-609178: {Iface:virbr1 ExpiryTime:2025-10-18 15:39:44 +0000 UTC Type:0 Mac:52:54:00:1e:56:77 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-609178-m03 Clientid:01:52:54:00:1e:56:77}
	I1018 14:43:35.423372 1777999 main.go:141] libmachine: (ha-609178-m03) DBG | domain ha-609178-m03 has defined IP address 192.168.39.119 and MAC address 52:54:00:1e:56:77 in network mk-ha-609178
	I1018 14:43:35.423581 1777999 host.go:66] Checking if "ha-609178-m03" exists ...
	I1018 14:43:35.423928 1777999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:43:35.423986 1777999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:43:35.438141 1777999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45173
	I1018 14:43:35.438754 1777999 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:43:35.439362 1777999 main.go:141] libmachine: Using API Version  1
	I1018 14:43:35.439394 1777999 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:43:35.439788 1777999 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:43:35.440062 1777999 main.go:141] libmachine: (ha-609178-m03) Calling .DriverName
	I1018 14:43:35.440335 1777999 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 14:43:35.440375 1777999 main.go:141] libmachine: (ha-609178-m03) Calling .GetSSHHostname
	I1018 14:43:35.443767 1777999 main.go:141] libmachine: (ha-609178-m03) DBG | domain ha-609178-m03 has defined MAC address 52:54:00:1e:56:77 in network mk-ha-609178
	I1018 14:43:35.444306 1777999 main.go:141] libmachine: (ha-609178-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:56:77", ip: ""} in network mk-ha-609178: {Iface:virbr1 ExpiryTime:2025-10-18 15:39:44 +0000 UTC Type:0 Mac:52:54:00:1e:56:77 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-609178-m03 Clientid:01:52:54:00:1e:56:77}
	I1018 14:43:35.444375 1777999 main.go:141] libmachine: (ha-609178-m03) DBG | domain ha-609178-m03 has defined IP address 192.168.39.119 and MAC address 52:54:00:1e:56:77 in network mk-ha-609178
	I1018 14:43:35.444475 1777999 main.go:141] libmachine: (ha-609178-m03) Calling .GetSSHPort
	I1018 14:43:35.444669 1777999 main.go:141] libmachine: (ha-609178-m03) Calling .GetSSHKeyPath
	I1018 14:43:35.444822 1777999 main.go:141] libmachine: (ha-609178-m03) Calling .GetSSHUsername
	I1018 14:43:35.445017 1777999 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/ha-609178-m03/id_rsa Username:docker}
	I1018 14:43:35.531984 1777999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 14:43:35.554640 1777999 kubeconfig.go:125] found "ha-609178" server: "https://192.168.39.254:8443"
	I1018 14:43:35.554679 1777999 api_server.go:166] Checking apiserver status ...
	I1018 14:43:35.554723 1777999 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 14:43:35.579601 1777999 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1830/cgroup
	W1018 14:43:35.594851 1777999 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1830/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1018 14:43:35.594925 1777999 ssh_runner.go:195] Run: ls
	I1018 14:43:35.602446 1777999 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1018 14:43:35.607513 1777999 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1018 14:43:35.607546 1777999 status.go:463] ha-609178-m03 apiserver status = Running (err=<nil>)
	I1018 14:43:35.607556 1777999 status.go:176] ha-609178-m03 status: &{Name:ha-609178-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 14:43:35.607579 1777999 status.go:174] checking status of ha-609178-m04 ...
	I1018 14:43:35.607904 1777999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:43:35.607955 1777999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:43:35.622634 1777999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34075
	I1018 14:43:35.623231 1777999 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:43:35.623849 1777999 main.go:141] libmachine: Using API Version  1
	I1018 14:43:35.623884 1777999 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:43:35.624201 1777999 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:43:35.624433 1777999 main.go:141] libmachine: (ha-609178-m04) Calling .GetState
	I1018 14:43:35.626356 1777999 status.go:371] ha-609178-m04 host status = "Running" (err=<nil>)
	I1018 14:43:35.626377 1777999 host.go:66] Checking if "ha-609178-m04" exists ...
	I1018 14:43:35.626775 1777999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:43:35.626830 1777999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:43:35.641365 1777999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45021
	I1018 14:43:35.641824 1777999 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:43:35.642297 1777999 main.go:141] libmachine: Using API Version  1
	I1018 14:43:35.642323 1777999 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:43:35.642727 1777999 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:43:35.642958 1777999 main.go:141] libmachine: (ha-609178-m04) Calling .GetIP
	I1018 14:43:35.646074 1777999 main.go:141] libmachine: (ha-609178-m04) DBG | domain ha-609178-m04 has defined MAC address 52:54:00:3f:65:32 in network mk-ha-609178
	I1018 14:43:35.646653 1777999 main.go:141] libmachine: (ha-609178-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:65:32", ip: ""} in network mk-ha-609178: {Iface:virbr1 ExpiryTime:2025-10-18 15:41:24 +0000 UTC Type:0 Mac:52:54:00:3f:65:32 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-609178-m04 Clientid:01:52:54:00:3f:65:32}
	I1018 14:43:35.646718 1777999 main.go:141] libmachine: (ha-609178-m04) DBG | domain ha-609178-m04 has defined IP address 192.168.39.164 and MAC address 52:54:00:3f:65:32 in network mk-ha-609178
	I1018 14:43:35.646898 1777999 host.go:66] Checking if "ha-609178-m04" exists ...
	I1018 14:43:35.647399 1777999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:43:35.647462 1777999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:43:35.662275 1777999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34143
	I1018 14:43:35.662837 1777999 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:43:35.663325 1777999 main.go:141] libmachine: Using API Version  1
	I1018 14:43:35.663370 1777999 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:43:35.663696 1777999 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:43:35.663852 1777999 main.go:141] libmachine: (ha-609178-m04) Calling .DriverName
	I1018 14:43:35.664024 1777999 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 14:43:35.664052 1777999 main.go:141] libmachine: (ha-609178-m04) Calling .GetSSHHostname
	I1018 14:43:35.667722 1777999 main.go:141] libmachine: (ha-609178-m04) DBG | domain ha-609178-m04 has defined MAC address 52:54:00:3f:65:32 in network mk-ha-609178
	I1018 14:43:35.668213 1777999 main.go:141] libmachine: (ha-609178-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:65:32", ip: ""} in network mk-ha-609178: {Iface:virbr1 ExpiryTime:2025-10-18 15:41:24 +0000 UTC Type:0 Mac:52:54:00:3f:65:32 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-609178-m04 Clientid:01:52:54:00:3f:65:32}
	I1018 14:43:35.668237 1777999 main.go:141] libmachine: (ha-609178-m04) DBG | domain ha-609178-m04 has defined IP address 192.168.39.164 and MAC address 52:54:00:3f:65:32 in network mk-ha-609178
	I1018 14:43:35.668436 1777999 main.go:141] libmachine: (ha-609178-m04) Calling .GetSSHPort
	I1018 14:43:35.668609 1777999 main.go:141] libmachine: (ha-609178-m04) Calling .GetSSHKeyPath
	I1018 14:43:35.668754 1777999 main.go:141] libmachine: (ha-609178-m04) Calling .GetSSHUsername
	I1018 14:43:35.668903 1777999 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/ha-609178-m04/id_rsa Username:docker}
	I1018 14:43:35.759655 1777999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 14:43:35.781728 1777999 status.go:176] ha-609178-m04 status: &{Name:ha-609178-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
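
The stderr trace above is minikube's per-node status probe: a disk-usage check over SSH (df -h /var), a kubelet check (systemctl is-active), then an apiserver check that pgreps the kube-apiserver process, attempts its freezer cgroup (the warning is expected on cgroup v2 guests, whose /proc/<pid>/cgroup has no freezer line), and finally hits the HA virtual endpoint's /healthz (192.168.39.254:8443, shared by every control plane in this run). A minimal shell sketch of that probe sequence, run from inside a node; curl stands in for minikube's internal HTTP check and is an assumption, not its code:

	# apiserver probe, approximated from the log above
	pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
	# exits 1 on cgroup v2 guests -- the "unable to find freezer cgroup" warning
	sudo egrep "^[0-9]+:freezer:" /proc/"$pid"/cgroup || echo "no freezer cgroup (cgroup v2)"
	curl -sk https://192.168.39.254:8443/healthz    # expect: ok
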
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (86.41s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

TestMultiControlPlane/serial/RestartSecondaryNode (37.36s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-609178 node start m02 --alsologtostderr -v 5: (36.177384709s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-609178 status --alsologtostderr -v 5: (1.10448401s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (37.36s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.16s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.16298793s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.16s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (378.7s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 stop --alsologtostderr -v 5
E1018 14:44:24.564355 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:44:52.267612 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:46:00.849904 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-609178 stop --alsologtostderr -v 5: (4m8.423711052s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 start --wait true --alsologtostderr -v 5
E1018 14:49:24.563656 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-609178 start --wait true --alsologtostderr -v 5: (2m10.146778361s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (378.70s)

TestMultiControlPlane/serial/DeleteSecondaryNode (19.02s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-609178 node delete m03 --alsologtostderr -v 5: (18.210898982s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
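
The quoted go-template walks each node's conditions and prints the status of every Ready condition, one per line, so the test can assert that all remaining nodes report True after m03 is deleted (the same template is reused after the full cluster restart below). Stripped of the harness quoting, the check is:

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
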
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (19.02s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

TestMultiControlPlane/serial/StopCluster (245.92s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 stop --alsologtostderr -v 5
E1018 14:51:00.845528 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:54:24.566576 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-609178 stop --alsologtostderr -v 5: (4m5.805102114s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-609178 status --alsologtostderr -v 5: exit status 7 (113.491129ms)

                                                
                                                
-- stdout --
	ha-609178
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-609178-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-609178-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 14:54:59.281314 1782401 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:54:59.281640 1782401 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:54:59.281657 1782401 out.go:374] Setting ErrFile to fd 2...
	I1018 14:54:59.281661 1782401 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:54:59.281923 1782401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1755824/.minikube/bin
	I1018 14:54:59.282175 1782401 out.go:368] Setting JSON to false
	I1018 14:54:59.282209 1782401 mustload.go:65] Loading cluster: ha-609178
	I1018 14:54:59.282319 1782401 notify.go:220] Checking for updates...
	I1018 14:54:59.282822 1782401 config.go:182] Loaded profile config "ha-609178": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:54:59.282845 1782401 status.go:174] checking status of ha-609178 ...
	I1018 14:54:59.283447 1782401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:54:59.283490 1782401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:54:59.304233 1782401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36761
	I1018 14:54:59.304817 1782401 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:54:59.305486 1782401 main.go:141] libmachine: Using API Version  1
	I1018 14:54:59.305522 1782401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:54:59.305903 1782401 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:54:59.306101 1782401 main.go:141] libmachine: (ha-609178) Calling .GetState
	I1018 14:54:59.307881 1782401 status.go:371] ha-609178 host status = "Stopped" (err=<nil>)
	I1018 14:54:59.307895 1782401 status.go:384] host is not running, skipping remaining checks
	I1018 14:54:59.307901 1782401 status.go:176] ha-609178 status: &{Name:ha-609178 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 14:54:59.307937 1782401 status.go:174] checking status of ha-609178-m02 ...
	I1018 14:54:59.308247 1782401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:54:59.308289 1782401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:54:59.322045 1782401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43757
	I1018 14:54:59.322513 1782401 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:54:59.322949 1782401 main.go:141] libmachine: Using API Version  1
	I1018 14:54:59.322972 1782401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:54:59.323281 1782401 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:54:59.323490 1782401 main.go:141] libmachine: (ha-609178-m02) Calling .GetState
	I1018 14:54:59.325271 1782401 status.go:371] ha-609178-m02 host status = "Stopped" (err=<nil>)
	I1018 14:54:59.325289 1782401 status.go:384] host is not running, skipping remaining checks
	I1018 14:54:59.325312 1782401 status.go:176] ha-609178-m02 status: &{Name:ha-609178-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 14:54:59.325339 1782401 status.go:174] checking status of ha-609178-m04 ...
	I1018 14:54:59.325659 1782401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 14:54:59.325697 1782401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 14:54:59.339172 1782401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40795
	I1018 14:54:59.339636 1782401 main.go:141] libmachine: () Calling .GetVersion
	I1018 14:54:59.340107 1782401 main.go:141] libmachine: Using API Version  1
	I1018 14:54:59.340123 1782401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 14:54:59.340473 1782401 main.go:141] libmachine: () Calling .GetMachineName
	I1018 14:54:59.340713 1782401 main.go:141] libmachine: (ha-609178-m04) Calling .GetState
	I1018 14:54:59.342630 1782401 status.go:371] ha-609178-m04 host status = "Stopped" (err=<nil>)
	I1018 14:54:59.342648 1782401 status.go:384] host is not running, skipping remaining checks
	I1018 14:54:59.342656 1782401 status.go:176] ha-609178-m04 status: &{Name:ha-609178-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
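
The non-zero exit is the expected result here: minikube status reports stopped hosts through its exit code (7 in this run, with every node showing Host/Kubelet/APIServer Stopped), so scripted checks should branch on the exit code rather than parse the text. A sketch:

	out/minikube-linux-amd64 -p ha-609178 status
	rc=$?
	[ "$rc" -ne 0 ] && echo "cluster not fully running, status exit code: $rc"
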
--- PASS: TestMultiControlPlane/serial/StopCluster (245.92s)

TestMultiControlPlane/serial/RestartCluster (105.91s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 14:55:47.629458 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:56:00.845637 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-609178 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m45.11001453s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (105.91s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

TestMultiControlPlane/serial/AddSecondaryNode (110.52s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-609178 node add --control-plane --alsologtostderr -v 5: (1m49.572933617s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-609178 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (110.52s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

TestJSONOutput/start/Command (82.98s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-986267 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 14:59:03.915437 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:59:24.563835 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-986267 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m22.976751604s)
--- PASS: TestJSONOutput/start/Command (82.98s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.78s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-986267 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.78s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.71s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-986267 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.71s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.05s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-986267 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-986267 --output=json --user=testUser: (7.051287576s)
--- PASS: TestJSONOutput/stop/Command (7.05s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-727249 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-727249 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (72.953709ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1d469e52-6f3a-4d8e-849d-7729544ffcfc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-727249] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f26275de-b3f9-45bc-b396-7bdad5985d9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21409"}}
	{"specversion":"1.0","id":"44e46740-28dd-4e97-990c-dccd531180e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a29ae46b-b92d-490a-8425-ca8cd38e86c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21409-1755824/kubeconfig"}}
	{"specversion":"1.0","id":"add18974-9e8e-4042-8193-d5e0784e9d4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1755824/.minikube"}}
	{"specversion":"1.0","id":"56ebc041-4f53-4a56-b1bd-793085058944","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"6d87c444-4cd9-4562-9fb7-23fbd4c0a229","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c4528582-00a4-4a5e-ac1d-2bcb059c319c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-727249" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-727249
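
Each stdout line above is a CloudEvents envelope (specversion 1.0) with the payload under data, which is what makes --output=json machine-consumable; the final event carries the error name DRV_UNSUPPORTED_OS and exit code 56. A consumer can filter the stream by event type, for example with jq (jq is an assumption, not part of the test):

	out/minikube-linux-amd64 start -p json-output-error-727249 --output=json --driver=fail \
	  | jq -r 'select(.type | endswith(".error")) | .data.name + ": " + .data.message'
	# DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64
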
--- PASS: TestErrorJSONOutput (0.22s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (88.88s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-407043 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-407043 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (42.023887275s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-410017 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 15:01:00.848557 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-410017 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (43.907447087s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-407043
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-410017
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
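
minikube profile <name> switches the active profile, and the test reads the result back with profile list -ojson, which returns arrays of valid and invalid profiles. A spot-check over the same output (jq assumed):

	out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'
	# first-407043
	# second-410017
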
helpers_test.go:175: Cleaning up "second-410017" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-410017
helpers_test.go:175: Cleaning up "first-407043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-407043
--- PASS: TestMinikubeProfile (88.88s)

TestMountStart/serial/StartWithMountFirst (23.37s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-544462 --memory=3072 --mount-string /tmp/TestMountStartserial3467606552/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-544462 --memory=3072 --mount-string /tmp/TestMountStartserial3467606552/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (22.371248966s)
--- PASS: TestMountStart/serial/StartWithMountFirst (23.37s)

TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-544462 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-544462 ssh -- findmnt --json /minikube-host
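
VerifyMount runs two checks: ls proves the host directory is readable from the guest, and findmnt --json returns the mount entry as structured data (a filesystems array). With the KVM driver the mount created by --mount-string is expected to be 9p, which is what the --mount-msize and --mount-port options at start configure; a spot-check (jq is an assumption, and so is the 9p fstype):

	out/minikube-linux-amd64 -p mount-start-1-544462 ssh -- findmnt --json /minikube-host \
	  | jq -r '.filesystems[0].fstype'
	# expected: 9p
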
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

TestMountStart/serial/StartWithMountSecond (24.24s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-563458 --memory=3072 --mount-string /tmp/TestMountStartserial3467606552/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-563458 --memory=3072 --mount-string /tmp/TestMountStartserial3467606552/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (23.241344832s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.24s)

TestMountStart/serial/VerifyMountSecond (0.39s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-563458 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-563458 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

TestMountStart/serial/DeleteFirst (0.74s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-544462 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.74s)

TestMountStart/serial/VerifyMountPostDelete (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-563458 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-563458 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

TestMountStart/serial/Stop (1.33s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-563458
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-563458: (1.334147392s)
--- PASS: TestMountStart/serial/Stop (1.33s)

TestMountStart/serial/RestartStopped (20.57s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-563458
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-563458: (19.570059899s)
--- PASS: TestMountStart/serial/RestartStopped (20.57s)

TestMountStart/serial/VerifyMountPostStop (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-563458 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-563458 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

TestMultiNode/serial/FreshStart2Nodes (132.14s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-019263 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 15:04:24.563528 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-019263 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (2m11.684678293s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (132.14s)

TestMultiNode/serial/DeployApp2Nodes (4.34s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-019263 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-019263 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-019263 -- rollout status deployment/busybox: (2.650821581s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-019263 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-019263 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-019263 -- exec busybox-7b57f96db7-hfgbv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-019263 -- exec busybox-7b57f96db7-pgn8x -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-019263 -- exec busybox-7b57f96db7-hfgbv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-019263 -- exec busybox-7b57f96db7-pgn8x -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-019263 -- exec busybox-7b57f96db7-hfgbv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-019263 -- exec busybox-7b57f96db7-pgn8x -- nslookup kubernetes.default.svc.cluster.local
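
The six lookups above exercise DNS from a pod on each of the two nodes, covering an external name (kubernetes.io), the short in-cluster service name (kubernetes.default), and the full FQDN; resolving all three from both pods shows kube-dns is reachable across nodes. To confirm the replicas actually landed on different nodes (a convenient check, not one the test makes here):

	kubectl --context multinode-019263 get pods -o wide
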
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.34s)

TestMultiNode/serial/PingHostFrom2Pods (0.85s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-019263 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-019263 -- exec busybox-7b57f96db7-hfgbv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-019263 -- exec busybox-7b57f96db7-hfgbv -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-019263 -- exec busybox-7b57f96db7-pgn8x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-019263 -- exec busybox-7b57f96db7-pgn8x -- sh -c "ping -c 1 192.168.39.1"
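
Each pod resolves host.minikube.internal, slices the answer out of busybox's nslookup output (line 5, third space-delimited field), and pings the result, 192.168.39.1, the libvirt gateway in this run, which proves pod-to-host connectivity. The same two steps outside the harness, using a pod name from this run:

	HOST_IP=$(kubectl --context multinode-019263 exec busybox-7b57f96db7-hfgbv -- \
	  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	kubectl --context multinode-019263 exec busybox-7b57f96db7-hfgbv -- ping -c 1 "$HOST_IP"
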
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.85s)

TestMultiNode/serial/AddNode (42.71s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-019263 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-019263 -v=5 --alsologtostderr: (42.089735801s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.71s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-019263 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
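
The jsonpath range prints every node's complete label map, which the test then inspects for the labels minikube applies. For a quicker per-node view of a single label, kubectl's label-columns flag works too (minikube.k8s.io/name as the key is an assumption about minikube's labels, not something this test asserts):

	kubectl --context multinode-019263 get nodes -L minikube.k8s.io/name
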
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.63s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.63s)

TestMultiNode/serial/CopyFile (7.72s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 cp testdata/cp-test.txt multinode-019263:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 ssh -n multinode-019263 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 cp multinode-019263:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2462144885/001/cp-test_multinode-019263.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 ssh -n multinode-019263 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 cp multinode-019263:/home/docker/cp-test.txt multinode-019263-m02:/home/docker/cp-test_multinode-019263_multinode-019263-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 ssh -n multinode-019263 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 ssh -n multinode-019263-m02 "sudo cat /home/docker/cp-test_multinode-019263_multinode-019263-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 cp multinode-019263:/home/docker/cp-test.txt multinode-019263-m03:/home/docker/cp-test_multinode-019263_multinode-019263-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 ssh -n multinode-019263 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 ssh -n multinode-019263-m03 "sudo cat /home/docker/cp-test_multinode-019263_multinode-019263-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 cp testdata/cp-test.txt multinode-019263-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 ssh -n multinode-019263-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 cp multinode-019263-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2462144885/001/cp-test_multinode-019263-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 ssh -n multinode-019263-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 cp multinode-019263-m02:/home/docker/cp-test.txt multinode-019263:/home/docker/cp-test_multinode-019263-m02_multinode-019263.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 ssh -n multinode-019263-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 ssh -n multinode-019263 "sudo cat /home/docker/cp-test_multinode-019263-m02_multinode-019263.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 cp multinode-019263-m02:/home/docker/cp-test.txt multinode-019263-m03:/home/docker/cp-test_multinode-019263-m02_multinode-019263-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 ssh -n multinode-019263-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 ssh -n multinode-019263-m03 "sudo cat /home/docker/cp-test_multinode-019263-m02_multinode-019263-m03.txt"
E1018 15:06:00.844902 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 cp testdata/cp-test.txt multinode-019263-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 ssh -n multinode-019263-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 cp multinode-019263-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2462144885/001/cp-test_multinode-019263-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 ssh -n multinode-019263-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 cp multinode-019263-m03:/home/docker/cp-test.txt multinode-019263:/home/docker/cp-test_multinode-019263-m03_multinode-019263.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 ssh -n multinode-019263-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 ssh -n multinode-019263 "sudo cat /home/docker/cp-test_multinode-019263-m03_multinode-019263.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 cp multinode-019263-m03:/home/docker/cp-test.txt multinode-019263-m02:/home/docker/cp-test_multinode-019263-m03_multinode-019263-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 ssh -n multinode-019263-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 ssh -n multinode-019263-m02 "sudo cat /home/docker/cp-test_multinode-019263-m03_multinode-019263-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.72s)
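The long sequence above is one three-step round trip repeated for every node pair; condensed to a single pair, the pattern is (a sketch, with the verify step shown once):

    # push a local file to the control plane, copy it node-to-node, verify on the receiver
    out/minikube-linux-amd64 -p multinode-019263 cp testdata/cp-test.txt multinode-019263:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-019263 cp multinode-019263:/home/docker/cp-test.txt multinode-019263-m02:/home/docker/cp-test_multinode-019263_multinode-019263-m02.txt
    out/minikube-linux-amd64 -p multinode-019263 ssh -n multinode-019263-m02 "sudo cat /home/docker/cp-test_multinode-019263_multinode-019263-m02.txt"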

TestMultiNode/serial/StopNode (2.61s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-019263 node stop m03: (1.692386151s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-019263 status: exit status 7 (457.993531ms)

-- stdout --
	multinode-019263
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-019263-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-019263-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-019263 status --alsologtostderr: exit status 7 (461.881397ms)

-- stdout --
	multinode-019263
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-019263-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-019263-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1018 15:06:05.482194 1790375 out.go:360] Setting OutFile to fd 1 ...
	I1018 15:06:05.482497 1790375 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:06:05.482509 1790375 out.go:374] Setting ErrFile to fd 2...
	I1018 15:06:05.482513 1790375 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:06:05.482723 1790375 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1755824/.minikube/bin
	I1018 15:06:05.482900 1790375 out.go:368] Setting JSON to false
	I1018 15:06:05.482932 1790375 mustload.go:65] Loading cluster: multinode-019263
	I1018 15:06:05.483012 1790375 notify.go:220] Checking for updates...
	I1018 15:06:05.483495 1790375 config.go:182] Loaded profile config "multinode-019263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:06:05.483517 1790375 status.go:174] checking status of multinode-019263 ...
	I1018 15:06:05.484035 1790375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 15:06:05.484080 1790375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 15:06:05.505024 1790375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34235
	I1018 15:06:05.505659 1790375 main.go:141] libmachine: () Calling .GetVersion
	I1018 15:06:05.506328 1790375 main.go:141] libmachine: Using API Version  1
	I1018 15:06:05.506403 1790375 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 15:06:05.506780 1790375 main.go:141] libmachine: () Calling .GetMachineName
	I1018 15:06:05.507001 1790375 main.go:141] libmachine: (multinode-019263) Calling .GetState
	I1018 15:06:05.508939 1790375 status.go:371] multinode-019263 host status = "Running" (err=<nil>)
	I1018 15:06:05.508955 1790375 host.go:66] Checking if "multinode-019263" exists ...
	I1018 15:06:05.509266 1790375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 15:06:05.509305 1790375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 15:06:05.523810 1790375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38945
	I1018 15:06:05.524334 1790375 main.go:141] libmachine: () Calling .GetVersion
	I1018 15:06:05.524898 1790375 main.go:141] libmachine: Using API Version  1
	I1018 15:06:05.524917 1790375 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 15:06:05.525254 1790375 main.go:141] libmachine: () Calling .GetMachineName
	I1018 15:06:05.525493 1790375 main.go:141] libmachine: (multinode-019263) Calling .GetIP
	I1018 15:06:05.528498 1790375 main.go:141] libmachine: (multinode-019263) DBG | domain multinode-019263 has defined MAC address 52:54:00:0f:fe:13 in network mk-multinode-019263
	I1018 15:06:05.529023 1790375 main.go:141] libmachine: (multinode-019263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:fe:13", ip: ""} in network mk-multinode-019263: {Iface:virbr1 ExpiryTime:2025-10-18 16:03:11 +0000 UTC Type:0 Mac:52:54:00:0f:fe:13 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-019263 Clientid:01:52:54:00:0f:fe:13}
	I1018 15:06:05.529055 1790375 main.go:141] libmachine: (multinode-019263) DBG | domain multinode-019263 has defined IP address 192.168.39.124 and MAC address 52:54:00:0f:fe:13 in network mk-multinode-019263
	I1018 15:06:05.529249 1790375 host.go:66] Checking if "multinode-019263" exists ...
	I1018 15:06:05.529785 1790375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 15:06:05.529857 1790375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 15:06:05.544286 1790375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39421
	I1018 15:06:05.544838 1790375 main.go:141] libmachine: () Calling .GetVersion
	I1018 15:06:05.545368 1790375 main.go:141] libmachine: Using API Version  1
	I1018 15:06:05.545395 1790375 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 15:06:05.545726 1790375 main.go:141] libmachine: () Calling .GetMachineName
	I1018 15:06:05.545949 1790375 main.go:141] libmachine: (multinode-019263) Calling .DriverName
	I1018 15:06:05.546190 1790375 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 15:06:05.546225 1790375 main.go:141] libmachine: (multinode-019263) Calling .GetSSHHostname
	I1018 15:06:05.549292 1790375 main.go:141] libmachine: (multinode-019263) DBG | domain multinode-019263 has defined MAC address 52:54:00:0f:fe:13 in network mk-multinode-019263
	I1018 15:06:05.549916 1790375 main.go:141] libmachine: (multinode-019263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:fe:13", ip: ""} in network mk-multinode-019263: {Iface:virbr1 ExpiryTime:2025-10-18 16:03:11 +0000 UTC Type:0 Mac:52:54:00:0f:fe:13 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-019263 Clientid:01:52:54:00:0f:fe:13}
	I1018 15:06:05.549949 1790375 main.go:141] libmachine: (multinode-019263) DBG | domain multinode-019263 has defined IP address 192.168.39.124 and MAC address 52:54:00:0f:fe:13 in network mk-multinode-019263
	I1018 15:06:05.550129 1790375 main.go:141] libmachine: (multinode-019263) Calling .GetSSHPort
	I1018 15:06:05.550316 1790375 main.go:141] libmachine: (multinode-019263) Calling .GetSSHKeyPath
	I1018 15:06:05.550509 1790375 main.go:141] libmachine: (multinode-019263) Calling .GetSSHUsername
	I1018 15:06:05.550692 1790375 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/multinode-019263/id_rsa Username:docker}
	I1018 15:06:05.637031 1790375 ssh_runner.go:195] Run: systemctl --version
	I1018 15:06:05.643789 1790375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:06:05.662487 1790375 kubeconfig.go:125] found "multinode-019263" server: "https://192.168.39.124:8443"
	I1018 15:06:05.662544 1790375 api_server.go:166] Checking apiserver status ...
	I1018 15:06:05.662625 1790375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 15:06:05.683807 1790375 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1382/cgroup
	W1018 15:06:05.696475 1790375 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1382/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1018 15:06:05.696551 1790375 ssh_runner.go:195] Run: ls
	I1018 15:06:05.702484 1790375 api_server.go:253] Checking apiserver healthz at https://192.168.39.124:8443/healthz ...
	I1018 15:06:05.708729 1790375 api_server.go:279] https://192.168.39.124:8443/healthz returned 200:
	ok
	I1018 15:06:05.708761 1790375 status.go:463] multinode-019263 apiserver status = Running (err=<nil>)
	I1018 15:06:05.708775 1790375 status.go:176] multinode-019263 status: &{Name:multinode-019263 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 15:06:05.708798 1790375 status.go:174] checking status of multinode-019263-m02 ...
	I1018 15:06:05.709169 1790375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 15:06:05.709215 1790375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 15:06:05.723577 1790375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44303
	I1018 15:06:05.724054 1790375 main.go:141] libmachine: () Calling .GetVersion
	I1018 15:06:05.724532 1790375 main.go:141] libmachine: Using API Version  1
	I1018 15:06:05.724556 1790375 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 15:06:05.724917 1790375 main.go:141] libmachine: () Calling .GetMachineName
	I1018 15:06:05.725140 1790375 main.go:141] libmachine: (multinode-019263-m02) Calling .GetState
	I1018 15:06:05.727033 1790375 status.go:371] multinode-019263-m02 host status = "Running" (err=<nil>)
	I1018 15:06:05.727053 1790375 host.go:66] Checking if "multinode-019263-m02" exists ...
	I1018 15:06:05.727370 1790375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 15:06:05.727410 1790375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 15:06:05.742000 1790375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36295
	I1018 15:06:05.742554 1790375 main.go:141] libmachine: () Calling .GetVersion
	I1018 15:06:05.743034 1790375 main.go:141] libmachine: Using API Version  1
	I1018 15:06:05.743056 1790375 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 15:06:05.743443 1790375 main.go:141] libmachine: () Calling .GetMachineName
	I1018 15:06:05.743662 1790375 main.go:141] libmachine: (multinode-019263-m02) Calling .GetIP
	I1018 15:06:05.747097 1790375 main.go:141] libmachine: (multinode-019263-m02) DBG | domain multinode-019263-m02 has defined MAC address 52:54:00:cb:0d:e3 in network mk-multinode-019263
	I1018 15:06:05.747550 1790375 main.go:141] libmachine: (multinode-019263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:0d:e3", ip: ""} in network mk-multinode-019263: {Iface:virbr1 ExpiryTime:2025-10-18 16:04:36 +0000 UTC Type:0 Mac:52:54:00:cb:0d:e3 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-019263-m02 Clientid:01:52:54:00:cb:0d:e3}
	I1018 15:06:05.747576 1790375 main.go:141] libmachine: (multinode-019263-m02) DBG | domain multinode-019263-m02 has defined IP address 192.168.39.193 and MAC address 52:54:00:cb:0d:e3 in network mk-multinode-019263
	I1018 15:06:05.747756 1790375 host.go:66] Checking if "multinode-019263-m02" exists ...
	I1018 15:06:05.748061 1790375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 15:06:05.748099 1790375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 15:06:05.762250 1790375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35177
	I1018 15:06:05.762755 1790375 main.go:141] libmachine: () Calling .GetVersion
	I1018 15:06:05.763189 1790375 main.go:141] libmachine: Using API Version  1
	I1018 15:06:05.763214 1790375 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 15:06:05.763573 1790375 main.go:141] libmachine: () Calling .GetMachineName
	I1018 15:06:05.763835 1790375 main.go:141] libmachine: (multinode-019263-m02) Calling .DriverName
	I1018 15:06:05.764057 1790375 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 15:06:05.764092 1790375 main.go:141] libmachine: (multinode-019263-m02) Calling .GetSSHHostname
	I1018 15:06:05.767237 1790375 main.go:141] libmachine: (multinode-019263-m02) DBG | domain multinode-019263-m02 has defined MAC address 52:54:00:cb:0d:e3 in network mk-multinode-019263
	I1018 15:06:05.767802 1790375 main.go:141] libmachine: (multinode-019263-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:0d:e3", ip: ""} in network mk-multinode-019263: {Iface:virbr1 ExpiryTime:2025-10-18 16:04:36 +0000 UTC Type:0 Mac:52:54:00:cb:0d:e3 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-019263-m02 Clientid:01:52:54:00:cb:0d:e3}
	I1018 15:06:05.767835 1790375 main.go:141] libmachine: (multinode-019263-m02) DBG | domain multinode-019263-m02 has defined IP address 192.168.39.193 and MAC address 52:54:00:cb:0d:e3 in network mk-multinode-019263
	I1018 15:06:05.767999 1790375 main.go:141] libmachine: (multinode-019263-m02) Calling .GetSSHPort
	I1018 15:06:05.768172 1790375 main.go:141] libmachine: (multinode-019263-m02) Calling .GetSSHKeyPath
	I1018 15:06:05.768299 1790375 main.go:141] libmachine: (multinode-019263-m02) Calling .GetSSHUsername
	I1018 15:06:05.768446 1790375 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1755824/.minikube/machines/multinode-019263-m02/id_rsa Username:docker}
	I1018 15:06:05.852888 1790375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:06:05.871699 1790375 status.go:176] multinode-019263-m02 status: &{Name:multinode-019263-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1018 15:06:05.871751 1790375 status.go:174] checking status of multinode-019263-m03 ...
	I1018 15:06:05.872190 1790375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 15:06:05.872244 1790375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 15:06:05.887321 1790375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44401
	I1018 15:06:05.887936 1790375 main.go:141] libmachine: () Calling .GetVersion
	I1018 15:06:05.888471 1790375 main.go:141] libmachine: Using API Version  1
	I1018 15:06:05.888497 1790375 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 15:06:05.888843 1790375 main.go:141] libmachine: () Calling .GetMachineName
	I1018 15:06:05.889058 1790375 main.go:141] libmachine: (multinode-019263-m03) Calling .GetState
	I1018 15:06:05.890830 1790375 status.go:371] multinode-019263-m03 host status = "Stopped" (err=<nil>)
	I1018 15:06:05.890843 1790375 status.go:384] host is not running, skipping remaining checks
	I1018 15:06:05.890849 1790375 status.go:176] multinode-019263-m03 status: &{Name:multinode-019263-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.61s)
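The two non-zero exits above are the expected outcome, not failures: status returns exit code 7 whenever any host in the profile is stopped. A minimal reproduction:

    out/minikube-linux-amd64 -p multinode-019263 node stop m03
    out/minikube-linux-amd64 -p multinode-019263 status; echo "exit=$?"   # prints exit=7 while m03 is Stopped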

TestMultiNode/serial/StartAfterStop (40.47s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-019263 node start m03 -v=5 --alsologtostderr: (39.800085337s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.47s)

TestMultiNode/serial/RestartKeepsNodes (312.56s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-019263
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-019263
E1018 15:09:24.566007 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-019263: (2m55.936480445s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-019263 --wait=true -v=5 --alsologtostderr
E1018 15:11:00.844604 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-019263 --wait=true -v=5 --alsologtostderr: (2m16.516122454s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-019263
--- PASS: TestMultiNode/serial/RestartKeepsNodes (312.56s)
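The assertion behind this test is that a full stop/start cycle preserves the node set, which reduces to comparing `node list` output before and after (a sketch; the temp-file paths are illustrative):

    out/minikube-linux-amd64 node list -p multinode-019263 > /tmp/nodes.before
    out/minikube-linux-amd64 stop -p multinode-019263
    out/minikube-linux-amd64 start -p multinode-019263 --wait=true -v=5 --alsologtostderr
    out/minikube-linux-amd64 node list -p multinode-019263 > /tmp/nodes.after
    diff /tmp/nodes.before /tmp/nodes.after   # an empty diff means the restart kept all nodes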

TestMultiNode/serial/DeleteNode (2.85s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-019263 node delete m03: (2.279001664s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.85s)
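The go-template in the final check prints one Ready condition per remaining node; the same one-liner works interactively after a delete (quoting shown exactly as the harness logs it):

    out/minikube-linux-amd64 -p multinode-019263 node delete m03
    kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"   # expect one True per node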

TestMultiNode/serial/StopMultiNode (171.81s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 stop
E1018 15:12:27.633086 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:14:24.565921 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-019263 stop: (2m51.622679194s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-019263 status: exit status 7 (96.958087ms)

-- stdout --
	multinode-019263
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-019263-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-019263 status --alsologtostderr: exit status 7 (91.887512ms)

-- stdout --
	multinode-019263
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-019263-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1018 15:14:53.545061 1793235 out.go:360] Setting OutFile to fd 1 ...
	I1018 15:14:53.545316 1793235 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:14:53.545324 1793235 out.go:374] Setting ErrFile to fd 2...
	I1018 15:14:53.545329 1793235 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:14:53.545533 1793235 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1755824/.minikube/bin
	I1018 15:14:53.545732 1793235 out.go:368] Setting JSON to false
	I1018 15:14:53.545763 1793235 mustload.go:65] Loading cluster: multinode-019263
	I1018 15:14:53.545850 1793235 notify.go:220] Checking for updates...
	I1018 15:14:53.546138 1793235 config.go:182] Loaded profile config "multinode-019263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:14:53.546153 1793235 status.go:174] checking status of multinode-019263 ...
	I1018 15:14:53.546546 1793235 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 15:14:53.546585 1793235 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 15:14:53.564783 1793235 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33797
	I1018 15:14:53.565395 1793235 main.go:141] libmachine: () Calling .GetVersion
	I1018 15:14:53.566004 1793235 main.go:141] libmachine: Using API Version  1
	I1018 15:14:53.566027 1793235 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 15:14:53.566471 1793235 main.go:141] libmachine: () Calling .GetMachineName
	I1018 15:14:53.566808 1793235 main.go:141] libmachine: (multinode-019263) Calling .GetState
	I1018 15:14:53.568978 1793235 status.go:371] multinode-019263 host status = "Stopped" (err=<nil>)
	I1018 15:14:53.568995 1793235 status.go:384] host is not running, skipping remaining checks
	I1018 15:14:53.569001 1793235 status.go:176] multinode-019263 status: &{Name:multinode-019263 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 15:14:53.569025 1793235 status.go:174] checking status of multinode-019263-m02 ...
	I1018 15:14:53.569339 1793235 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 15:14:53.569422 1793235 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 15:14:53.583618 1793235 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34219
	I1018 15:14:53.584119 1793235 main.go:141] libmachine: () Calling .GetVersion
	I1018 15:14:53.584603 1793235 main.go:141] libmachine: Using API Version  1
	I1018 15:14:53.584625 1793235 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 15:14:53.585009 1793235 main.go:141] libmachine: () Calling .GetMachineName
	I1018 15:14:53.585235 1793235 main.go:141] libmachine: (multinode-019263-m02) Calling .GetState
	I1018 15:14:53.587274 1793235 status.go:371] multinode-019263-m02 host status = "Stopped" (err=<nil>)
	I1018 15:14:53.587289 1793235 status.go:384] host is not running, skipping remaining checks
	I1018 15:14:53.587298 1793235 status.go:176] multinode-019263-m02 status: &{Name:multinode-019263-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (171.81s)

TestMultiNode/serial/RestartMultiNode (118.46s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-019263 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 15:15:43.917261 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:16:00.844588 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-019263 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m57.887657389s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019263 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (118.46s)

TestMultiNode/serial/ValidateNameConflict (44.92s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-019263
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-019263-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-019263-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (71.067521ms)

-- stdout --
	* [multinode-019263-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-1755824/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1755824/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-019263-m02' is duplicated with machine name 'multinode-019263-m02' in profile 'multinode-019263'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-019263-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-019263-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (43.686944724s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-019263
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-019263: exit status 80 (233.407933ms)

-- stdout --
	* Adding node m03 to cluster multinode-019263 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-019263-m03 already exists in multinode-019263-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-019263-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.92s)
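Both non-zero exits here are deliberate: worker machines are named <profile>-m02, <profile>-m03, and so on, so a standalone profile that collides with an existing machine name is refused (exit 14, MK_USAGE), and once such a conflicting profile does exist, `node add` refuses to create a node whose generated name is already taken (exit 80, GUEST_NODE_ADD). Condensed (harness-only flags dropped):

    # refused: 'multinode-019263-m02' is already a machine inside profile multinode-019263
    out/minikube-linux-amd64 start -p multinode-019263-m02 --driver=kvm2 --container-runtime=crio
    # allowed: -m03 was free as a profile name at this point...
    out/minikube-linux-amd64 start -p multinode-019263-m03 --driver=kvm2 --container-runtime=crio
    # ...so the next node add, whose node would also be named -m03, is refused
    out/minikube-linux-amd64 node add -p multinode-019263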

TestScheduledStopUnix (110.71s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-287725 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 15:21:00.844611 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-287725 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (38.921149999s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-287725 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-287725 -n scheduled-stop-287725
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-287725 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1018 15:21:02.109152 1759792 retry.go:31] will retry after 116.182µs: open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/scheduled-stop-287725/pid: no such file or directory
I1018 15:21:02.110334 1759792 retry.go:31] will retry after 102.879µs: open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/scheduled-stop-287725/pid: no such file or directory
I1018 15:21:02.111501 1759792 retry.go:31] will retry after 200.557µs: open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/scheduled-stop-287725/pid: no such file or directory
I1018 15:21:02.112672 1759792 retry.go:31] will retry after 387.793µs: open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/scheduled-stop-287725/pid: no such file or directory
I1018 15:21:02.113839 1759792 retry.go:31] will retry after 580.993µs: open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/scheduled-stop-287725/pid: no such file or directory
I1018 15:21:02.114989 1759792 retry.go:31] will retry after 914.387µs: open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/scheduled-stop-287725/pid: no such file or directory
I1018 15:21:02.116130 1759792 retry.go:31] will retry after 819.871µs: open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/scheduled-stop-287725/pid: no such file or directory
I1018 15:21:02.117287 1759792 retry.go:31] will retry after 1.074731ms: open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/scheduled-stop-287725/pid: no such file or directory
I1018 15:21:02.119479 1759792 retry.go:31] will retry after 1.380903ms: open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/scheduled-stop-287725/pid: no such file or directory
I1018 15:21:02.121744 1759792 retry.go:31] will retry after 2.920611ms: open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/scheduled-stop-287725/pid: no such file or directory
I1018 15:21:02.124964 1759792 retry.go:31] will retry after 7.433202ms: open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/scheduled-stop-287725/pid: no such file or directory
I1018 15:21:02.133197 1759792 retry.go:31] will retry after 6.975074ms: open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/scheduled-stop-287725/pid: no such file or directory
I1018 15:21:02.140426 1759792 retry.go:31] will retry after 10.548824ms: open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/scheduled-stop-287725/pid: no such file or directory
I1018 15:21:02.151690 1759792 retry.go:31] will retry after 21.441908ms: open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/scheduled-stop-287725/pid: no such file or directory
I1018 15:21:02.174042 1759792 retry.go:31] will retry after 31.066313ms: open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/scheduled-stop-287725/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-287725 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-287725 -n scheduled-stop-287725
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-287725
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-287725 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-287725
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-287725: exit status 7 (76.417701ms)

-- stdout --
	scheduled-stop-287725
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-287725 -n scheduled-stop-287725
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-287725 -n scheduled-stop-287725: exit status 7 (69.136817ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-287725" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-287725
--- PASS: TestScheduledStopUnix (110.71s)
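The scheduled-stop surface exercised above condenses to three commands (timings taken from this run; after the final schedule fires, status exits 7 with everything Stopped):

    out/minikube-linux-amd64 stop -p scheduled-stop-287725 --schedule 5m          # arm a stop five minutes out
    out/minikube-linux-amd64 stop -p scheduled-stop-287725 --cancel-scheduled     # disarm it; the host stays Running
    out/minikube-linux-amd64 stop -p scheduled-stop-287725 --schedule 15s         # re-arm; ~15s later the stop lands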

TestRunningBinaryUpgrade (163.52s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.279390798 start -p running-upgrade-607040 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.279390798 start -p running-upgrade-607040 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m45.750419474s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-607040 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-607040 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (54.942340541s)
helpers_test.go:175: Cleaning up "running-upgrade-607040" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-607040
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-607040: (2.104239726s)
--- PASS: TestRunningBinaryUpgrade (163.52s)
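The upgrade path under test: bring a cluster up with an old released binary, then run `start` on the same profile with the freshly built binary while the cluster is still running (a sketch; the /tmp binary name carries a random suffix from this run, and harness-only flags are dropped):

    /tmp/minikube-v1.32.0.279390798 start -p running-upgrade-607040 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p running-upgrade-607040 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio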

TestKubernetesUpgrade (149.26s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-075048 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-075048 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m13.159329157s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-075048
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-075048: (2.018267943s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-075048 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-075048 status --format={{.Host}}: exit status 7 (82.346001ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-075048 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 15:26:00.845552 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-075048 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (51.287795562s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-075048 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-075048 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-075048 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 106 (95.281048ms)

-- stdout --
	* [kubernetes-upgrade-075048] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-1755824/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1755824/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-075048
	    minikube start -p kubernetes-upgrade-075048 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0750482 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-075048 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-075048 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-075048 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (21.505430557s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-075048" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-075048
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-075048: (1.052981745s)
--- PASS: TestKubernetesUpgrade (149.26s)
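Condensed, the flow is: start at v1.28.0, stop, upgrade the stopped cluster in place to v1.34.1, confirm a downgrade request is refused (exit 106, K8S_DOWNGRADE_UNSUPPORTED), then restart at the current version:

    out/minikube-linux-amd64 start -p kubernetes-upgrade-075048 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 stop -p kubernetes-upgrade-075048
    out/minikube-linux-amd64 start -p kubernetes-upgrade-075048 --memory=3072 --kubernetes-version=v1.34.1 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p kubernetes-upgrade-075048 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio   # refused with exit 106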

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-479967 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-479967 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (86.666919ms)

-- stdout --
	* [NoKubernetes-479967] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-1755824/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1755824/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
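This is pure flag validation: --no-kubernetes and --kubernetes-version are mutually exclusive, so the start is refused before any VM is created (exit 14), and the error points at the global config in case a version was pinned there:

    out/minikube-linux-amd64 start -p NoKubernetes-479967 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2   # exit 14 (MK_USAGE)
    minikube config unset kubernetes-version   # clears a globally pinned version, per the suggestion above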

TestNoKubernetes/serial/StartWithK8s (85.41s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-479967 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-479967 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m25.056190101s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-479967 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (85.41s)

TestNetworkPlugins/group/false (4.72s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-320866 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-320866 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (117.969056ms)

-- stdout --
	* [false-320866] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-1755824/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1755824/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1018 15:23:01.748257 1798628 out.go:360] Setting OutFile to fd 1 ...
	I1018 15:23:01.748595 1798628 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:23:01.748610 1798628 out.go:374] Setting ErrFile to fd 2...
	I1018 15:23:01.748617 1798628 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:23:01.748934 1798628 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1755824/.minikube/bin
	I1018 15:23:01.749664 1798628 out.go:368] Setting JSON to false
	I1018 15:23:01.750989 1798628 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":25530,"bootTime":1760775452,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 15:23:01.751133 1798628 start.go:141] virtualization: kvm guest
	I1018 15:23:01.753332 1798628 out.go:179] * [false-320866] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 15:23:01.754953 1798628 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 15:23:01.754990 1798628 notify.go:220] Checking for updates...
	I1018 15:23:01.758011 1798628 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 15:23:01.759284 1798628 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1755824/kubeconfig
	I1018 15:23:01.760758 1798628 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1755824/.minikube
	I1018 15:23:01.762298 1798628 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 15:23:01.763798 1798628 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 15:23:01.765575 1798628 config.go:182] Loaded profile config "NoKubernetes-479967": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:23:01.765684 1798628 config.go:182] Loaded profile config "offline-crio-459651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:23:01.765769 1798628 config.go:182] Loaded profile config "running-upgrade-607040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1018 15:23:01.765850 1798628 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 15:23:01.806473 1798628 out.go:179] * Using the kvm2 driver based on user configuration
	I1018 15:23:01.807944 1798628 start.go:305] selected driver: kvm2
	I1018 15:23:01.807962 1798628 start.go:925] validating driver "kvm2" against <nil>
	I1018 15:23:01.807976 1798628 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 15:23:01.809925 1798628 out.go:203] 
	W1018 15:23:01.811063 1798628 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1018 15:23:01.812197 1798628 out.go:203] 

** /stderr **
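This rejection is the point of the test: the crio runtime needs a CNI plugin, so --cni=false is refused up front (exit 14) and no cluster is ever created, which is why every debug probe below reports a missing context or profile:

    out/minikube-linux-amd64 start -p false-320866 --cni=false --container-runtime=crio --driver=kvm2
    # X Exiting due to MK_USAGE: The "crio" container runtime requires CNI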
net_test.go:88: 
----------------------- debugLogs start: false-320866 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-320866

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-320866

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-320866

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-320866

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-320866

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-320866

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-320866

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-320866

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-320866

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-320866

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-320866

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-320866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-320866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-320866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-320866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-320866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-320866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-320866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-320866" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-320866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-320866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-320866" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-320866

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-320866"

                                                
                                                
----------------------- debugLogs end: false-320866 [took: 4.426611161s] --------------------------------
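Every probe above fails with "context was not found" or "Profile ... not found" simply because the false-320866 cluster was never created, which is why debugLogs is marked [pass: true]. Against a profile that did start, the same checks can be run by hand; a sketch (PROFILE is a placeholder for a running profile with the netcat deployment applied):

	kubectl --context PROFILE exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context PROFILE exec deployment/netcat -- /bin/sh -c "nc -w 5 -z 10.96.0.10 53"
	out/minikube-linux-amd64 ssh -p PROFILE "cat /etc/resolv.conf"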
helpers_test.go:175: Cleaning up "false-320866" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-320866
--- PASS: TestNetworkPlugins/group/false (4.72s)

x
+
TestNoKubernetes/serial/StartWithStopK8s (51.84s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-479967 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-479967 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (50.693930143s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-479967 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-479967 status -o json: exit status 2 (244.118756ms)

-- stdout --
	{"Name":"NoKubernetes-479967","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
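The JSON above is the state the test asserts: Host Running, Kubelet and APIServer Stopped. A parsing sketch (assumes jq is available on the host; field names are exactly those in the stdout above):

	out/minikube-linux-amd64 -p NoKubernetes-479967 status -o json | jq -r '.Host + "/" + .Kubelet'
	# prints Running/Stopped; note that status deliberately exits non-zero (2 here)
	# when a component is down, so scripts should read the JSON, not the exit code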
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-479967
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (51.84s)

x
+
TestNoKubernetes/serial/Start (44.87s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-479967 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-479967 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (44.872119337s)
--- PASS: TestNoKubernetes/serial/Start (44.87s)

x
+
TestPause/serial/Start (127.16s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-153767 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-153767 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (2m7.161226059s)
--- PASS: TestPause/serial/Start (127.16s)

x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-479967 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-479967 "sudo systemctl is-active --quiet service kubelet": exit status 1 (235.126528ms)

** stderr ** 
	ssh: Process exited with status 4
** /stderr **
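The assertion is intentionally inverted: systemctl is-active exits 0 only for an active unit, so the non-zero exit proves kubelet is not running (the remote status 4 is in systemd's "unit status unknown" family of codes; the exact value can vary by systemd version). Equivalent manual check:

	out/minikube-linux-amd64 ssh -p NoKubernetes-479967 "sudo systemctl is-active kubelet" \
		|| echo "kubelet inactive, as a --no-kubernetes profile requires"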
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

x
+
TestNoKubernetes/serial/ProfileList (1.17s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.17s)

x
+
TestNoKubernetes/serial/Stop (1.38s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-479967
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-479967: (1.379557229s)
--- PASS: TestNoKubernetes/serial/Stop (1.38s)

x
+
TestNoKubernetes/serial/StartNoArgs (58.14s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-479967 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-479967 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (58.136915553s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (58.14s)

x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-479967 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-479967 "sudo systemctl is-active --quiet service kubelet": exit status 1 (233.418627ms)

** stderr ** 
	ssh: Process exited with status 4
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

x
+
TestStoppedBinaryUpgrade/Setup (0.46s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.46s)

x
+
TestStoppedBinaryUpgrade/Upgrade (112.81s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.819796594 start -p stopped-upgrade-646879 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.819796594 start -p stopped-upgrade-646879 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (58.847347126s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.819796594 -p stopped-upgrade-646879 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.819796594 -p stopped-upgrade-646879 stop: (2.492854687s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-646879 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-646879 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (51.47327966s)
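Condensed, the upgrade scenario is three invocations: the old binary creates the cluster, the old binary stops it, and the new binary must adopt and restart it (paths as in the log; the /tmp binary is the downloaded v1.32.0 release):

	/tmp/minikube-v1.32.0.819796594 start -p stopped-upgrade-646879 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
	/tmp/minikube-v1.32.0.819796594 -p stopped-upgrade-646879 stop
	out/minikube-linux-amd64 start -p stopped-upgrade-646879 --memory=3072 --driver=kvm2 --container-runtime=crio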
--- PASS: TestStoppedBinaryUpgrade/Upgrade (112.81s)

x
+
TestNetworkPlugins/group/auto/Start (86.68s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-320866 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-320866 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m26.68204978s)
--- PASS: TestNetworkPlugins/group/auto/Start (86.68s)

x
+
TestNetworkPlugins/group/kindnet/Start (70.32s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-320866 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-320866 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m10.31487463s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (70.32s)

x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.5s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-646879
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-646879: (1.49529766s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.50s)

x
+
TestNetworkPlugins/group/calico/Start (75.36s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-320866 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-320866 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m15.364829158s)
--- PASS: TestNetworkPlugins/group/calico/Start (75.36s)

x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-320866 "pgrep -a kubelet"
I1018 15:28:39.006942 1759792 config.go:182] Loaded profile config "auto-320866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

x
+
TestNetworkPlugins/group/auto/NetCatPod (12.33s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-320866 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fc5wr" [9ec2e83d-2a7a-48b7-b64a-3fb5ba500cc8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fc5wr" [9ec2e83d-2a7a-48b7-b64a-3fb5ba500cc8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.005824701s
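kubectl replace --force deletes and recreates the deployment, so reruns always start from a clean object. The readiness poll that helpers_test.go performs is roughly this kubectl equivalent (a sketch; the app=netcat label comes from the log above):

	kubectl --context auto-320866 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context auto-320866 wait --for=condition=Ready pod -l app=netcat --timeout=15m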
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.33s)

x
+
TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-320866 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

x
+
TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-320866 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

x
+
TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-320866 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
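HairPin checks hairpin traffic: the netcat pod dials its own Service name ("netcat", port 8080), so the packet leaves the pod and must be NATed straight back to it. The same probe by hand (service name and port taken from the command above):

	kubectl --context auto-320866 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z netcat 8080" && echo hairpin-ok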
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-hmttk" [ac6dfb0e-3ddd-4f25-92c8-b12277db4ed9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004603167s
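The ControllerPod gate is roughly this kubectl wait (a sketch; the label selector comes from the log line above):

	kubectl --context kindnet-320866 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m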
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

x
+
TestNetworkPlugins/group/custom-flannel/Start (84.54s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-320866 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-320866 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m24.544272432s)
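Here --cni is given a file path instead of a built-in name (kindnet, calico, flannel, bridge, false), so minikube applies the repo's own copy of the kube-flannel manifest. The pattern, stripped to its essentials:

	out/minikube-linux-amd64 start -p custom-flannel-320866 --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=crio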
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (84.54s)

x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-320866 "pgrep -a kubelet"
I1018 15:29:04.214243 1759792 config.go:182] Loaded profile config "kindnet-320866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.29s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-320866 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-f6ldf" [5014e3d8-4665-4746-b411-865bb9ce99f4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1018 15:29:07.635562 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-f6ldf" [5014e3d8-4665-4746-b411-865bb9ce99f4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004920567s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.29s)

x
+
TestNetworkPlugins/group/enable-default-cni/Start (79.57s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-320866 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-320866 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m19.568727613s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (79.57s)

x
+
TestNetworkPlugins/group/kindnet/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-320866 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

x
+
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-320866 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

x
+
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-320866 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-kqrbb" [f50659b8-ac3a-430b-87e7-a9a3cb90ac50] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-kqrbb" [f50659b8-ac3a-430b-87e7-a9a3cb90ac50] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006782508s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.58s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-320866 "pgrep -a kubelet"
I1018 15:29:35.395331 1759792 config.go:182] Loaded profile config "calico-320866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.58s)

x
+
TestNetworkPlugins/group/calico/NetCatPod (12.31s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-320866 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zds72" [cfc2dfb2-a278-42a6-b826-144f1ec05082] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zds72" [cfc2dfb2-a278-42a6-b826-144f1ec05082] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004609872s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.31s)

x
+
TestNetworkPlugins/group/flannel/Start (90.75s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-320866 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-320866 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m30.749174957s)
--- PASS: TestNetworkPlugins/group/flannel/Start (90.75s)

x
+
TestNetworkPlugins/group/calico/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-320866 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

x
+
TestNetworkPlugins/group/calico/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-320866 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

x
+
TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-320866 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

x
+
TestNetworkPlugins/group/bridge/Start (93.83s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-320866 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-320866 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m33.830115314s)
--- PASS: TestNetworkPlugins/group/bridge/Start (93.83s)

x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-320866 "pgrep -a kubelet"
I1018 15:30:28.036400 1759792 config.go:182] Loaded profile config "custom-flannel-320866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-320866 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-s9jvf" [5b710990-ebad-4feb-a5a7-76d009e2bf32] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-s9jvf" [5b710990-ebad-4feb-a5a7-76d009e2bf32] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004587218s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.31s)

x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-320866 "pgrep -a kubelet"
I1018 15:30:29.859957 1759792 config.go:182] Loaded profile config "enable-default-cni-320866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-320866 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qpz67" [3b3a5fad-cec6-4b74-938c-a8a62e5abb9b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qpz67" [3b3a5fad-cec6-4b74-938c-a8a62e5abb9b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005842949s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.32s)

x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-320866 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-320866 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-320866 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-320866 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-320866 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.28s)

x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-320866 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (97.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-681355 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-681355 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (1m37.076438429s)
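The old-k8s-version group pins --kubernetes-version=v1.28.0 to prove the current binary can still drive the oldest Kubernetes line this job exercises. A quick sanity check once the cluster is up (a sketch):

	kubectl --context old-k8s-version-681355 version
	# the server version should report v1.28.x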
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (97.08s)

x
+
TestStartStop/group/embed-certs/serial/FirstStart (112.04s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-859736 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1018 15:31:00.844939 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-859736 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m52.044066168s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (112.04s)

x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-wfh79" [764e1104-f0cd-4d22-8161-65468fc48ad6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004199317s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-320866 "pgrep -a kubelet"
I1018 15:31:12.466207 1759792 config.go:182] Loaded profile config "flannel-320866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-320866 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7cgzk" [adece61b-4161-4101-be47-3c2b9c41e639] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7cgzk" [adece61b-4161-4101-be47-3c2b9c41e639] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.006363845s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.29s)

x
+
TestNetworkPlugins/group/flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-320866 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

x
+
TestNetworkPlugins/group/flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-320866 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

x
+
TestNetworkPlugins/group/flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-320866 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.21s)

x
+
TestStartStop/group/no-preload/serial/FirstStart (109.16s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-922654 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-922654 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m49.159041843s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (109.16s)
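
--preload=false disables minikube's preloaded image/kubelet tarball, so every control-plane image is pulled individually inside the VM; that is the main reason this first start runs to ~1m49s while comparable preloaded starts in this report finish faster. To see what ended up cached in the node afterwards (the same subcommand the VerifyKubernetesImages steps use later):

    out/minikube-linux-amd64 -p no-preload-922654 image list --format=json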

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-320866 "pgrep -a kubelet"
I1018 15:31:42.454279 1759792 config.go:182] Loaded profile config "bridge-320866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.32s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-320866 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fqvhk" [1c50ca63-f740-4051-805e-ed869c97a97e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fqvhk" [1c50ca63-f740-4051-805e-ed869c97a97e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.005562512s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.32s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-320866 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-320866 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-320866 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)
E1018 15:35:40.411425 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/enable-default-cni-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:35:48.823326 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/custom-flannel-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:35:50.653574 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/enable-default-cni-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:35:50.751830 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/calico-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:36:00.845129 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:36:06.249998 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/flannel-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:36:06.256458 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/flannel-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:36:06.267920 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/flannel-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:36:06.289442 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/flannel-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:36:06.331104 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/flannel-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:36:06.412775 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/flannel-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:36:06.574140 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/flannel-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:36:06.895528 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/flannel-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:36:07.537498 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/flannel-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:36:08.819382 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/flannel-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:36:09.304888 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/custom-flannel-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
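
The burst of cert_rotation.go errors above (and the similar bursts later in this section) is background noise rather than a test failure: client-go's shared transport cache keeps trying to reload client certificates for profiles that earlier tests already deleted, and retries with exponential backoff, which is why the gaps between consecutive timestamps roughly double (6ms, 11ms, 21ms, 42ms, ...). The referenced files are simply gone, which is easy to confirm by hand; this listing is illustrative, not part of the run:

    # Deleted profiles (flannel-320866, calico-320866, ...) no longer appear here
    ls /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/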

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-161412 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1018 15:32:23.919400 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/addons-891059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-161412 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m28.17258545s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-681355 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f9dfeedd-f810-4955-8f57-d9791edc2448] Pending
helpers_test.go:352: "busybox" [f9dfeedd-f810-4955-8f57-d9791edc2448] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f9dfeedd-f810-4955-8f57-d9791edc2448] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.095245262s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-681355 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.69s)
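
Every DeployApp step follows the same script: apply testdata/busybox.yaml, wait up to 8m for the integration-test=busybox pod to become healthy, then exec ulimit -n inside it as a smoke test that exec and the runtime plumbing work. A condensed manual equivalent (the wait flag is a hedged substitute for the harness's own poller):

    kubectl --context old-k8s-version-681355 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-681355 wait pod -l integration-test=busybox \
      --for=condition=Ready --timeout=8m
    kubectl --context old-k8s-version-681355 exec busybox -- /bin/sh -c "ulimit -n"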

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-681355 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-681355 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.295020231s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-681355 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.38s)
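
The --images=MetricsServer=registry.k8s.io/echoserver:1.4 and --registries=MetricsServer=fake.domain overrides deliberately point the addon at an unpullable image: the step only verifies that the Deployment object is created while the cluster is live, not that metrics actually flow. One hedged way to confirm the override landed (the jsonpath expression is an assumption; the test itself just runs describe):

    kubectl --context old-k8s-version-681355 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'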

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (90.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-681355 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-681355 --alsologtostderr -v=3: (1m30.476199156s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (90.48s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.35s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-859736 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b7afacb2-932f-4f90-abf5-8dff44b37fae] Pending
helpers_test.go:352: "busybox" [b7afacb2-932f-4f90-abf5-8dff44b37fae] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b7afacb2-932f-4f90-abf5-8dff44b37fae] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.00528284s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-859736 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.35s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-859736 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-859736 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.144427152s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-859736 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (71.22s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-859736 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-859736 --alsologtostderr -v=3: (1m11.22220056s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (71.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.3s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-922654 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [472b4bd8-73b7-4738-9fa3-1c65e497f300] Pending
helpers_test.go:352: "busybox" [472b4bd8-73b7-4738-9fa3-1c65e497f300] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [472b4bd8-73b7-4738-9fa3-1c65e497f300] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004822079s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-922654 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-922654 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1018 15:33:39.306503 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:33:39.312969 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:33:39.324432 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:33:39.345903 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:33:39.387379 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:33:39.468922 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:33:39.631257 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-922654 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (88.6s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-922654 --alsologtostderr -v=3
E1018 15:33:39.953099 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:33:40.595513 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-922654 --alsologtostderr -v=3: (1m28.597960174s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (88.60s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-161412 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ac7b6d0f-89e6-4c15-972b-414387eb5475] Pending
E1018 15:33:41.877734 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [ac7b6d0f-89e6-4c15-972b-414387eb5475] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ac7b6d0f-89e6-4c15-972b-414387eb5475] Running
E1018 15:33:44.439776 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004635926s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-161412 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-161412 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1018 15:33:49.561505 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-161412 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (88.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-161412 --alsologtostderr -v=3
E1018 15:33:57.947320 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/kindnet-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:33:57.953758 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/kindnet-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:33:57.965181 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/kindnet-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:33:57.986628 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/kindnet-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:33:58.028286 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/kindnet-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:33:58.109781 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/kindnet-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:33:58.271460 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/kindnet-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:33:58.593590 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/kindnet-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:33:59.235781 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/kindnet-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:33:59.803019 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:34:00.517274 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/kindnet-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:34:03.079143 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/kindnet-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:34:08.201576 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/kindnet-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-161412 --alsologtostderr -v=3: (1m28.166146689s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (88.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-859736 -n embed-certs-859736
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-859736 -n embed-certs-859736: exit status 7 (78.42683ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-859736 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)
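
With the VM stopped, minikube status exits non-zero (exit status 7 here), which the harness explicitly tolerates ("may be ok"); the real assertion is that addons enable dashboard still succeeds against a stopped profile, so the addon is persisted and expected to come up after the next start. Reproduced by hand:

    out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-859736 -n embed-certs-859736   # "Stopped", exit 7
    out/minikube-linux-amd64 addons enable dashboard -p embed-certs-859736 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4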

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (48s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-859736 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-859736 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (47.631667914s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-859736 -n embed-certs-859736
E1018 15:35:01.246618 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (48.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-681355 -n old-k8s-version-681355
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-681355 -n old-k8s-version-681355: exit status 7 (77.825884ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-681355 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (60.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-681355 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
E1018 15:34:18.443802 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/kindnet-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:34:20.284732 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:34:24.564570 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/functional-900196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:34:28.813802 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/calico-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:34:28.820266 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/calico-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:34:28.831742 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/calico-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:34:28.853154 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/calico-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:34:28.894637 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/calico-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:34:28.976178 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/calico-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:34:29.137731 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/calico-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:34:29.459145 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/calico-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:34:30.101426 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/calico-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:34:31.383152 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/calico-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:34:33.944874 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/calico-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:34:38.925909 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/kindnet-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:34:39.066454 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/calico-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:34:49.308621 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/calico-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-681355 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (1m0.283788679s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-681355 -n old-k8s-version-681355
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (60.69s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mhq4c" [9850af85-e93c-4605-b46b-55841dcf03e3] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mhq4c" [9850af85-e93c-4605-b46b-55841dcf03e3] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.004767113s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.01s)
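
UserAppExistsAfterStop (and AddonExistsAfterStop later in this section) close the loop on the addon enabled while the cluster was down: after SecondStart, the dashboard pods must reach Running and the dashboard-metrics-scraper deployment must be describable. A hedged stand-alone version of the check (the harness again uses its own poller):

    kubectl --context embed-certs-859736 -n kubernetes-dashboard wait pod \
      -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m
    kubectl --context embed-certs-859736 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard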

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-922654 -n no-preload-922654
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-922654 -n no-preload-922654: exit status 7 (79.825852ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-922654 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (61.71s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-922654 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-922654 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m1.159481439s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-922654 -n no-preload-922654
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (61.71s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mhq4c" [9850af85-e93c-4605-b46b-55841dcf03e3] Running
E1018 15:35:09.790214 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/calico-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008621609s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-859736 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-859736 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.18s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-859736 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-859736 -n embed-certs-859736
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-859736 -n embed-certs-859736: exit status 2 (294.602067ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-859736 -n embed-certs-859736
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-859736 -n embed-certs-859736: exit status 2 (293.764048ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-859736 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-859736 -n embed-certs-859736
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-859736 -n embed-certs-859736
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.18s)
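
Pause freezes the control-plane containers and then asserts a split status: {{.APIServer}} reports Paused and {{.Kubelet}} reports Stopped, each with exit status 2, which the harness again treats as acceptable; unpause must bring both back. The probe sequence, exactly as issued above:

    out/minikube-linux-amd64 pause -p embed-certs-859736 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-859736 -n embed-certs-859736   # Paused, exit 2
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-859736 -n embed-certs-859736     # Stopped, exit 2
    out/minikube-linux-amd64 unpause -p embed-certs-859736 --alsologtostderr -v=1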

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (10.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-nt5r7" [6d906af7-8644-4f81-b985-c899a82c937a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-nt5r7" [6d906af7-8644-4f81-b985-c899a82c937a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.007261832s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (10.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-161412 -n default-k8s-diff-port-161412
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-161412 -n default-k8s-diff-port-161412: exit status 7 (96.892816ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-161412 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (56.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-161412 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-161412 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (55.686088784s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-161412 -n default-k8s-diff-port-161412
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (56.06s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (75.7s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-720139 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1018 15:35:19.887869 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/kindnet-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-720139 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m15.695383955s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (75.70s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-nt5r7" [6d906af7-8644-4f81-b985-c899a82c937a] Running
E1018 15:35:28.326008 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/custom-flannel-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:35:28.332583 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/custom-flannel-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:35:28.344102 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/custom-flannel-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:35:28.365604 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/custom-flannel-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:35:28.407286 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/custom-flannel-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:35:28.489630 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/custom-flannel-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:35:28.651602 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/custom-flannel-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:35:28.973481 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/custom-flannel-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:35:29.615280 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/custom-flannel-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:35:30.156442 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/enable-default-cni-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:35:30.162945 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/enable-default-cni-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:35:30.174459 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/enable-default-cni-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:35:30.195966 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/enable-default-cni-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:35:30.237491 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/enable-default-cni-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:35:30.319063 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/enable-default-cni-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:35:30.480672 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/enable-default-cni-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:35:30.802736 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/enable-default-cni-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:35:30.897661 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/custom-flannel-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:35:31.445081 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/enable-default-cni-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:35:32.727447 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/enable-default-cni-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:35:33.459974 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/custom-flannel-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004848052s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-681355 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.14s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-681355 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-681355 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-681355 -n old-k8s-version-681355
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-681355 -n old-k8s-version-681355: exit status 2 (289.779723ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-681355 -n old-k8s-version-681355
E1018 15:35:35.289091 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/enable-default-cni-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-681355 -n old-k8s-version-681355: exit status 2 (291.685414ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-681355 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-681355 -n old-k8s-version-681355
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-681355 -n old-k8s-version-681355
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.31s)
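Every Pause subtest in this group follows the same round trip: pause the profile, confirm the apiserver reports "Paused" and the kubelet "Stopped" (each via a status call that deliberately exits 2), then unpause and re-check. A hedged Go sketch of that flow, driving the minikube binary directly rather than reusing the test harness; the tolerated exit status 2 is taken from the log above, not from minikube documentation:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// status returns the requested field and the command's exit code; a
// non-zero exit is expected while components are paused or stopped.
func status(profile, field string) (string, int) {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{."+field+"}}", "-p", profile, "-n", profile)
	out, _ := cmd.Output()
	code := -1
	if cmd.ProcessState != nil {
		code = cmd.ProcessState.ExitCode()
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	const p = "old-k8s-version-681355"
	if err := exec.Command("out/minikube-linux-amd64", "pause", "-p", p).Run(); err != nil {
		panic(err)
	}
	if s, code := status(p, "APIServer"); s != "Paused" {
		panic(fmt.Sprintf("APIServer = %q (exit %d), want Paused", s, code))
	}
	if s, _ := status(p, "Kubelet"); s != "Stopped" {
		panic(fmt.Sprintf("Kubelet = %q, want Stopped", s))
	}
	if err := exec.Command("out/minikube-linux-amd64", "unpause", "-p", p).Run(); err != nil {
		panic(err)
	}
}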

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2j8hf" [b40b1cfc-c3de-432a-aa38-ec9c221589c7] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1018 15:36:11.135461 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/enable-default-cni-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2j8hf" [b40b1cfc-c3de-432a-aa38-ec9c221589c7] Running
E1018 15:36:11.380944 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/flannel-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.005867538s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7.01s)
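The UserAppExistsAfterStop checks here and below reduce to polling for a pod matching a label selector until it reports Running, within the stated budget. A rough Go equivalent that shells out to kubectl the way these log lines do; the 2-second interval and jsonpath query are illustrative choices, not the helpers_test.go implementation (which also tracks readiness, as the Pending/Ready transitions above show):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(9 * time.Minute)
	for time.Now().Before(deadline) {
		// Ask only for the pod phases of the matching pods.
		out, err := exec.Command("kubectl", "--context", "no-preload-922654",
			"get", "pods", "-n", "kubernetes-dashboard",
			"-l", "k8s-app=kubernetes-dashboard",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			fmt.Println("k8s-app=kubernetes-dashboard healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
	panic("timed out waiting for k8s-app=kubernetes-dashboard")
}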

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xswlv" [1744b6b3-76ad-49b8-bf8d-a8891ac17197] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1018 15:36:16.502537 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/flannel-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xswlv" [1744b6b3-76ad-49b8-bf8d-a8891ac17197] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.004565019s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2j8hf" [b40b1cfc-c3de-432a-aa38-ec9c221589c7] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003886982s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-922654 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-922654 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-922654 --alsologtostderr -v=1
E1018 15:36:23.168541 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/auto-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-922654 -n no-preload-922654
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-922654 -n no-preload-922654: exit status 2 (285.276797ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-922654 -n no-preload-922654
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-922654 -n no-preload-922654: exit status 2 (283.397651ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-922654 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-922654 -n no-preload-922654
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-922654 -n no-preload-922654
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xswlv" [1744b6b3-76ad-49b8-bf8d-a8891ac17197] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006388021s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-161412 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-161412 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-161412 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-161412 -n default-k8s-diff-port-161412
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-161412 -n default-k8s-diff-port-161412: exit status 2 (276.690976ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-161412 -n default-k8s-diff-port-161412
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-161412 -n default-k8s-diff-port-161412: exit status 2 (274.994077ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-161412 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-161412 -n default-k8s-diff-port-161412
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-161412 -n default-k8s-diff-port-161412
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.03s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-720139 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-720139 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.088302268s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/newest-cni/serial/Stop (10.98s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-720139 --alsologtostderr -v=3
E1018 15:36:41.809478 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/kindnet-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:36:42.743874 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/bridge-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:36:42.750293 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/bridge-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:36:42.761743 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/bridge-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:36:42.783263 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/bridge-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:36:42.824747 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/bridge-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:36:42.906401 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/bridge-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:36:43.067986 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/bridge-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:36:43.389813 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/bridge-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:36:44.031972 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/bridge-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:36:45.313514 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/bridge-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:36:47.226096 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/flannel-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-720139 --alsologtostderr -v=3: (10.982106063s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.98s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-720139 -n newest-cni-720139
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-720139 -n newest-cni-720139: exit status 7 (68.941144ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-720139 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
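One detail worth flagging here: on a stopped cluster, `minikube status` exits with status 7 while printing "Stopped", and the test tolerates that before enabling the dashboard addon, since addon configuration persists and is applied on the next start (the SecondStart below). A small Go sketch of the same sequence; the exit-code reading is inferred from this report, not from minikube documentation:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const p = "newest-cni-720139"
	st := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", p, "-n", p)
	out, _ := st.Output() // observed: exits 7 while the host is stopped
	code := -1
	if st.ProcessState != nil {
		code = st.ProcessState.ExitCode()
	}
	fmt.Printf("host=%q exit=%d (may be ok when stopped)\n", string(out), code)

	// Addons can still be configured while the cluster is down.
	if err := exec.Command("out/minikube-linux-amd64", "addons", "enable", "dashboard",
		"-p", p, "--images=MetricsScraper=registry.k8s.io/echoserver:1.4").Run(); err != nil {
		panic(err)
	}
}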

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (35.71s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-720139 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1018 15:36:47.875540 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/bridge-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:36:50.267837 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/custom-flannel-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:36:52.097644 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/enable-default-cni-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:36:52.997291 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/bridge-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:37:03.239072 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/bridge-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:37:12.674576 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/calico-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-720139 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (35.423693181s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-720139 -n newest-cni-720139
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (35.71s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-720139 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/Pause (3.21s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-720139 --alsologtostderr -v=1
E1018 15:37:23.720982 1759792 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1755824/.minikube/profiles/bridge-320866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-720139 -n newest-cni-720139
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-720139 -n newest-cni-720139: exit status 2 (268.349078ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-720139 -n newest-cni-720139
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-720139 -n newest-cni-720139: exit status 2 (308.751276ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-720139 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-720139 --alsologtostderr -v=1: (1.02326888s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-720139 -n newest-cni-720139
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-720139 -n newest-cni-720139
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.21s)

                                                
                                    

Test skip (40/324)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.33
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
119 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
120 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
121 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
122 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
125 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
259 TestNetworkPlugins/group/kubenet 3.81
267 TestNetworkPlugins/group/cilium 4.16
282 TestStartStop/group/disable-driver-mounts 0.17
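Most of these skips are environment gates: each test inspects the configured driver, container runtime, or host OS up front and skips when the combination does not apply (this run uses the kvm2 driver with the crio runtime on linux/amd64). A generic Go sketch of the pattern, using hypothetical helper arguments rather than minikube's actual test helpers:

package testutil

import "testing"

// maybeSkip gates a test on the runtime/driver under test, mirroring
// the skip reasons recorded in the blocks below.
func maybeSkip(t *testing.T, runtime, driver string) {
	if runtime != "docker" {
		t.Skipf("skipping: only runs with docker container runtime, currently testing %s", runtime)
	}
	if driver != "docker" && driver != "podman" {
		t.Skip("only runs with docker/podman driver")
	}
}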
TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.33s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-891059 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.33s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.81s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-320866 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-320866

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-320866

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-320866

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-320866

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-320866

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-320866

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-320866

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-320866

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-320866

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-320866

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-320866

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-320866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-320866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-320866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-320866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-320866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-320866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-320866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-320866" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-320866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-320866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-320866" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-320866

>>> host: docker daemon status:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

>>> host: docker daemon config:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

>>> host: docker system info:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

>>> host: cri-docker daemon status:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

>>> host: cri-docker daemon config:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

>>> host: cri-dockerd version:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

>>> host: containerd daemon status:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

>>> host: containerd daemon config:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

>>> host: containerd config dump:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

>>> host: crio daemon status:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

>>> host: crio daemon config:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

>>> host: /etc/crio:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

>>> host: crio config:
* Profile "kubenet-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-320866"

----------------------- debugLogs end: kubenet-320866 [took: 3.643929265s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-320866" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-320866
--- SKIP: TestNetworkPlugins/group/kubenet (3.81s)

TestNetworkPlugins/group/cilium (4.16s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-320866 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-320866

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-320866

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-320866

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-320866

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-320866

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-320866

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-320866

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-320866

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-320866

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-320866

>>> host: /etc/nsswitch.conf:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

>>> host: /etc/hosts:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

>>> host: /etc/resolv.conf:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-320866

>>> host: crictl pods:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

>>> host: crictl containers:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

>>> k8s: describe netcat deployment:
error: context "cilium-320866" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-320866" does not exist

>>> k8s: netcat logs:
error: context "cilium-320866" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-320866" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-320866" does not exist

>>> k8s: coredns logs:
error: context "cilium-320866" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-320866" does not exist

>>> k8s: api server logs:
error: context "cilium-320866" does not exist

>>> host: /etc/cni:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

>>> host: ip a s:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

>>> host: ip r s:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

>>> host: iptables-save:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

>>> host: iptables table nat:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-320866

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-320866

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-320866" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-320866" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-320866

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-320866

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-320866" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-320866" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-320866" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-320866" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-320866" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

>>> host: kubelet daemon config:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

>>> k8s: kubelet logs:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-320866

>>> host: docker daemon status:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

>>> host: docker daemon config:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

>>> host: docker system info:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

>>> host: cri-docker daemon status:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

>>> host: cri-docker daemon config:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

>>> host: cri-dockerd version:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

>>> host: containerd daemon status:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

>>> host: containerd daemon config:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

>>> host: containerd config dump:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

>>> host: crio daemon status:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

>>> host: crio daemon config:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

>>> host: /etc/crio:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

>>> host: crio config:
* Profile "cilium-320866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-320866"

----------------------- debugLogs end: cilium-320866 [took: 3.983503289s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-320866" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-320866
--- SKIP: TestNetworkPlugins/group/cilium (4.16s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-599966" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-599966
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
